Nov 22 04:42:27 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 22 04:42:27 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 22 04:42:27 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 04:42:27 localhost kernel: BIOS-provided physical RAM map:
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 22 04:42:27 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 22 04:42:27 localhost kernel: NX (Execute Disable) protection: active
Nov 22 04:42:27 localhost kernel: APIC: Static calls initialized
Nov 22 04:42:27 localhost kernel: SMBIOS 2.8 present.
Nov 22 04:42:27 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 22 04:42:27 localhost kernel: Hypervisor detected: KVM
Nov 22 04:42:27 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 22 04:42:27 localhost kernel: kvm-clock: using sched offset of 5046684261 cycles
Nov 22 04:42:27 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 22 04:42:27 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 22 04:42:27 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 22 04:42:27 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 22 04:42:27 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 22 04:42:27 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 22 04:42:27 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 22 04:42:27 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 22 04:42:27 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 22 04:42:27 localhost kernel: Using GB pages for direct mapping
Nov 22 04:42:27 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 22 04:42:27 localhost kernel: ACPI: Early table checksum verification disabled
Nov 22 04:42:27 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 22 04:42:27 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 04:42:27 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 04:42:27 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 04:42:27 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 22 04:42:27 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 04:42:27 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 04:42:27 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 22 04:42:27 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 22 04:42:27 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 22 04:42:27 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 22 04:42:27 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 22 04:42:27 localhost kernel: No NUMA configuration found
Nov 22 04:42:27 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 22 04:42:27 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 22 04:42:27 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 22 04:42:27 localhost kernel: Zone ranges:
Nov 22 04:42:27 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 22 04:42:27 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 22 04:42:27 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 04:42:27 localhost kernel:   Device   empty
Nov 22 04:42:27 localhost kernel: Movable zone start for each node
Nov 22 04:42:27 localhost kernel: Early memory node ranges
Nov 22 04:42:27 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 22 04:42:27 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 22 04:42:27 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 04:42:27 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 22 04:42:27 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 22 04:42:27 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 22 04:42:27 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 22 04:42:27 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 22 04:42:27 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 22 04:42:27 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 22 04:42:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 22 04:42:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 22 04:42:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 22 04:42:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 22 04:42:27 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 22 04:42:27 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 22 04:42:27 localhost kernel: TSC deadline timer available
Nov 22 04:42:27 localhost kernel: CPU topo: Max. logical packages:   8
Nov 22 04:42:27 localhost kernel: CPU topo: Max. logical dies:       8
Nov 22 04:42:27 localhost kernel: CPU topo: Max. dies per package:   1
Nov 22 04:42:27 localhost kernel: CPU topo: Max. threads per core:   1
Nov 22 04:42:27 localhost kernel: CPU topo: Num. cores per package:     1
Nov 22 04:42:27 localhost kernel: CPU topo: Num. threads per package:   1
Nov 22 04:42:27 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 22 04:42:27 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 22 04:42:27 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 22 04:42:27 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 22 04:42:27 localhost kernel: Booting paravirtualized kernel on KVM
Nov 22 04:42:27 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 22 04:42:27 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 22 04:42:27 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 22 04:42:27 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 22 04:42:27 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 22 04:42:27 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 22 04:42:27 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 04:42:27 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 22 04:42:27 localhost kernel: random: crng init done
Nov 22 04:42:27 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 22 04:42:27 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 22 04:42:27 localhost kernel: Fallback order for Node 0: 0 
Nov 22 04:42:27 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 22 04:42:27 localhost kernel: Policy zone: Normal
Nov 22 04:42:27 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 22 04:42:27 localhost kernel: software IO TLB: area num 8.
Nov 22 04:42:27 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 22 04:42:27 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 22 04:42:27 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 22 04:42:27 localhost kernel: Dynamic Preempt: voluntary
Nov 22 04:42:27 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 22 04:42:27 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 22 04:42:27 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 22 04:42:27 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 22 04:42:27 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 22 04:42:27 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 22 04:42:27 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 22 04:42:27 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 22 04:42:27 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 04:42:27 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 04:42:27 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 04:42:27 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 22 04:42:27 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 22 04:42:27 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 22 04:42:27 localhost kernel: Console: colour VGA+ 80x25
Nov 22 04:42:27 localhost kernel: printk: console [ttyS0] enabled
Nov 22 04:42:27 localhost kernel: ACPI: Core revision 20230331
Nov 22 04:42:27 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 22 04:42:27 localhost kernel: x2apic enabled
Nov 22 04:42:27 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 22 04:42:27 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 22 04:42:27 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 22 04:42:27 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 22 04:42:27 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 22 04:42:27 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 22 04:42:27 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 22 04:42:27 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 22 04:42:27 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 22 04:42:27 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 22 04:42:27 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 22 04:42:27 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 22 04:42:27 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 22 04:42:27 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 22 04:42:27 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 22 04:42:27 localhost kernel: x86/bugs: return thunk changed
Nov 22 04:42:27 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 22 04:42:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 22 04:42:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 22 04:42:27 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 22 04:42:27 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 22 04:42:27 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 22 04:42:27 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 22 04:42:27 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 22 04:42:27 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 22 04:42:27 localhost kernel: landlock: Up and running.
Nov 22 04:42:27 localhost kernel: Yama: becoming mindful.
Nov 22 04:42:27 localhost kernel: SELinux:  Initializing.
Nov 22 04:42:27 localhost kernel: LSM support for eBPF active
Nov 22 04:42:27 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 04:42:27 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 04:42:27 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 22 04:42:27 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 22 04:42:27 localhost kernel: ... version:                0
Nov 22 04:42:27 localhost kernel: ... bit width:              48
Nov 22 04:42:27 localhost kernel: ... generic registers:      6
Nov 22 04:42:27 localhost kernel: ... value mask:             0000ffffffffffff
Nov 22 04:42:27 localhost kernel: ... max period:             00007fffffffffff
Nov 22 04:42:27 localhost kernel: ... fixed-purpose events:   0
Nov 22 04:42:27 localhost kernel: ... event mask:             000000000000003f
Nov 22 04:42:27 localhost kernel: signal: max sigframe size: 1776
Nov 22 04:42:27 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 22 04:42:27 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 22 04:42:27 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 22 04:42:27 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 22 04:42:27 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 22 04:42:27 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 22 04:42:27 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 22 04:42:27 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 22 04:42:27 localhost kernel: Memory: 7765988K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616268K reserved, 0K cma-reserved)
Nov 22 04:42:27 localhost kernel: devtmpfs: initialized
Nov 22 04:42:27 localhost kernel: x86/mm: Memory block size: 128MB
Nov 22 04:42:27 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 22 04:42:27 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 22 04:42:27 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 22 04:42:27 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 22 04:42:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 22 04:42:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 22 04:42:27 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 22 04:42:27 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 22 04:42:27 localhost kernel: audit: type=2000 audit(1763786546.133:1): state=initialized audit_enabled=0 res=1
Nov 22 04:42:27 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 22 04:42:27 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 22 04:42:27 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 22 04:42:27 localhost kernel: cpuidle: using governor menu
Nov 22 04:42:27 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 22 04:42:27 localhost kernel: PCI: Using configuration type 1 for base access
Nov 22 04:42:27 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 22 04:42:27 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 22 04:42:27 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 22 04:42:27 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 22 04:42:27 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 22 04:42:27 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 22 04:42:27 localhost kernel: Demotion targets for Node 0: null
Nov 22 04:42:27 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 22 04:42:27 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 22 04:42:27 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 22 04:42:27 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 22 04:42:27 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 22 04:42:27 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 22 04:42:27 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 22 04:42:27 localhost kernel: ACPI: Interpreter enabled
Nov 22 04:42:27 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 22 04:42:27 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 22 04:42:27 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 22 04:42:27 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 22 04:42:27 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 22 04:42:27 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 22 04:42:27 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [3] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [4] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [5] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [6] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [7] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [8] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [9] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [10] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [11] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [12] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [13] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [14] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [15] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [16] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [17] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [18] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [19] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [20] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [21] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [22] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [23] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [24] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [25] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [26] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [27] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [28] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [29] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [30] registered
Nov 22 04:42:27 localhost kernel: acpiphp: Slot [31] registered
Nov 22 04:42:27 localhost kernel: PCI host bridge to bus 0000:00
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 22 04:42:27 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 22 04:42:27 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 22 04:42:27 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 04:42:27 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 22 04:42:27 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 22 04:42:27 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 22 04:42:27 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 22 04:42:27 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 22 04:42:27 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 22 04:42:27 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 22 04:42:27 localhost kernel: iommu: Default domain type: Translated
Nov 22 04:42:27 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 22 04:42:27 localhost kernel: SCSI subsystem initialized
Nov 22 04:42:27 localhost kernel: ACPI: bus type USB registered
Nov 22 04:42:27 localhost kernel: usbcore: registered new interface driver usbfs
Nov 22 04:42:27 localhost kernel: usbcore: registered new interface driver hub
Nov 22 04:42:27 localhost kernel: usbcore: registered new device driver usb
Nov 22 04:42:27 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 22 04:42:27 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 22 04:42:27 localhost kernel: PTP clock support registered
Nov 22 04:42:27 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 22 04:42:27 localhost kernel: NetLabel: Initializing
Nov 22 04:42:27 localhost kernel: NetLabel:  domain hash size = 128
Nov 22 04:42:27 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 22 04:42:27 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 22 04:42:27 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 22 04:42:27 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 22 04:42:27 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 22 04:42:27 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 22 04:42:27 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 22 04:42:27 localhost kernel: vgaarb: loaded
Nov 22 04:42:27 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 22 04:42:27 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 22 04:42:27 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 22 04:42:27 localhost kernel: pnp: PnP ACPI init
Nov 22 04:42:27 localhost kernel: pnp 00:03: [dma 2]
Nov 22 04:42:27 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 22 04:42:27 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 22 04:42:27 localhost kernel: NET: Registered PF_INET protocol family
Nov 22 04:42:27 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 22 04:42:27 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 22 04:42:27 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 22 04:42:27 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 22 04:42:27 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 22 04:42:27 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 22 04:42:27 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 22 04:42:27 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 04:42:27 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 04:42:27 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 22 04:42:27 localhost kernel: NET: Registered PF_XDP protocol family
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 22 04:42:27 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 22 04:42:27 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 22 04:42:27 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 22 04:42:27 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 90944 usecs
Nov 22 04:42:27 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 22 04:42:27 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 22 04:42:27 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 22 04:42:27 localhost kernel: ACPI: bus type thunderbolt registered
Nov 22 04:42:27 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 22 04:42:27 localhost kernel: Initialise system trusted keyrings
Nov 22 04:42:27 localhost kernel: Key type blacklist registered
Nov 22 04:42:27 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 22 04:42:27 localhost kernel: zbud: loaded
Nov 22 04:42:27 localhost kernel: integrity: Platform Keyring initialized
Nov 22 04:42:27 localhost kernel: integrity: Machine keyring initialized
Nov 22 04:42:27 localhost kernel: Freeing initrd memory: 85868K
Nov 22 04:42:27 localhost kernel: NET: Registered PF_ALG protocol family
Nov 22 04:42:27 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 22 04:42:27 localhost kernel: Key type asymmetric registered
Nov 22 04:42:27 localhost kernel: Asymmetric key parser 'x509' registered
Nov 22 04:42:27 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 22 04:42:27 localhost kernel: io scheduler mq-deadline registered
Nov 22 04:42:27 localhost kernel: io scheduler kyber registered
Nov 22 04:42:27 localhost kernel: io scheduler bfq registered
Nov 22 04:42:27 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 22 04:42:27 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 22 04:42:27 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 22 04:42:27 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 22 04:42:27 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 22 04:42:27 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 22 04:42:27 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 22 04:42:27 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 22 04:42:27 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 22 04:42:27 localhost kernel: Non-volatile memory driver v1.3
Nov 22 04:42:27 localhost kernel: rdac: device handler registered
Nov 22 04:42:27 localhost kernel: hp_sw: device handler registered
Nov 22 04:42:27 localhost kernel: emc: device handler registered
Nov 22 04:42:27 localhost kernel: alua: device handler registered
Nov 22 04:42:27 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 22 04:42:27 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 22 04:42:27 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 22 04:42:27 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 22 04:42:27 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 22 04:42:27 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 22 04:42:27 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 22 04:42:27 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 22 04:42:27 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 22 04:42:27 localhost kernel: hub 1-0:1.0: USB hub found
Nov 22 04:42:27 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 22 04:42:27 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 22 04:42:27 localhost kernel: usbserial: USB Serial support registered for generic
Nov 22 04:42:27 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 22 04:42:27 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 22 04:42:27 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 22 04:42:27 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 22 04:42:27 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 22 04:42:27 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 22 04:42:27 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 22 04:42:27 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 22 04:42:27 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T04:42:26 UTC (1763786546)
Nov 22 04:42:27 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 22 04:42:27 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 22 04:42:27 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 22 04:42:27 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 22 04:42:27 localhost kernel: usbcore: registered new interface driver usbhid
Nov 22 04:42:27 localhost kernel: usbhid: USB HID core driver
Nov 22 04:42:27 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 22 04:42:27 localhost kernel: Initializing XFRM netlink socket
Nov 22 04:42:27 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 22 04:42:27 localhost kernel: Segment Routing with IPv6
Nov 22 04:42:27 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 22 04:42:27 localhost kernel: mpls_gso: MPLS GSO support
Nov 22 04:42:27 localhost kernel: IPI shorthand broadcast: enabled
Nov 22 04:42:27 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 22 04:42:27 localhost kernel: AES CTR mode by8 optimization enabled
Nov 22 04:42:27 localhost kernel: sched_clock: Marking stable (1241001383, 150433338)->(1502009563, -110574842)
Nov 22 04:42:27 localhost kernel: registered taskstats version 1
Nov 22 04:42:27 localhost kernel: Loading compiled-in X.509 certificates
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 22 04:42:27 localhost kernel: Demotion targets for Node 0: null
Nov 22 04:42:27 localhost kernel: page_owner is disabled
Nov 22 04:42:27 localhost kernel: Key type .fscrypt registered
Nov 22 04:42:27 localhost kernel: Key type fscrypt-provisioning registered
Nov 22 04:42:27 localhost kernel: Key type big_key registered
Nov 22 04:42:27 localhost kernel: Key type encrypted registered
Nov 22 04:42:27 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 22 04:42:27 localhost kernel: Loading compiled-in module X.509 certificates
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 04:42:27 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 22 04:42:27 localhost kernel: ima: No architecture policies found
Nov 22 04:42:27 localhost kernel: evm: Initialising EVM extended attributes:
Nov 22 04:42:27 localhost kernel: evm: security.selinux
Nov 22 04:42:27 localhost kernel: evm: security.SMACK64 (disabled)
Nov 22 04:42:27 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 22 04:42:27 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 22 04:42:27 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 22 04:42:27 localhost kernel: evm: security.apparmor (disabled)
Nov 22 04:42:27 localhost kernel: evm: security.ima
Nov 22 04:42:27 localhost kernel: evm: security.capability
Nov 22 04:42:27 localhost kernel: evm: HMAC attrs: 0x1
Nov 22 04:42:27 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 22 04:42:27 localhost kernel: Running certificate verification RSA selftest
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 22 04:42:27 localhost kernel: Running certificate verification ECDSA selftest
Nov 22 04:42:27 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 22 04:42:27 localhost kernel: clk: Disabling unused clocks
Nov 22 04:42:27 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 22 04:42:27 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 22 04:42:27 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 22 04:42:27 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 22 04:42:27 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 22 04:42:27 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 22 04:42:27 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 22 04:42:27 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 22 04:42:27 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 22 04:42:27 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 22 04:42:27 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 22 04:42:27 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 22 04:42:27 localhost kernel: Run /init as init process
Nov 22 04:42:27 localhost kernel:   with arguments:
Nov 22 04:42:27 localhost kernel:     /init
Nov 22 04:42:27 localhost kernel:   with environment:
Nov 22 04:42:27 localhost kernel:     HOME=/
Nov 22 04:42:27 localhost kernel:     TERM=linux
Nov 22 04:42:27 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 22 04:42:27 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 04:42:27 localhost systemd[1]: Detected virtualization kvm.
Nov 22 04:42:27 localhost systemd[1]: Detected architecture x86-64.
Nov 22 04:42:27 localhost systemd[1]: Running in initrd.
Nov 22 04:42:27 localhost systemd[1]: No hostname configured, using default hostname.
Nov 22 04:42:27 localhost systemd[1]: Hostname set to <localhost>.
Nov 22 04:42:27 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 22 04:42:27 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 22 04:42:27 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 04:42:27 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 04:42:27 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 22 04:42:27 localhost systemd[1]: Reached target Local File Systems.
Nov 22 04:42:27 localhost systemd[1]: Reached target Path Units.
Nov 22 04:42:27 localhost systemd[1]: Reached target Slice Units.
Nov 22 04:42:27 localhost systemd[1]: Reached target Swaps.
Nov 22 04:42:27 localhost systemd[1]: Reached target Timer Units.
Nov 22 04:42:27 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 04:42:27 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 22 04:42:27 localhost systemd[1]: Listening on Journal Socket.
Nov 22 04:42:27 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 04:42:27 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 04:42:27 localhost systemd[1]: Reached target Socket Units.
Nov 22 04:42:27 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 04:42:27 localhost systemd[1]: Starting Journal Service...
Nov 22 04:42:27 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 04:42:27 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 04:42:27 localhost systemd[1]: Starting Create System Users...
Nov 22 04:42:27 localhost systemd[1]: Starting Setup Virtual Console...
Nov 22 04:42:27 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 04:42:27 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 04:42:27 localhost systemd[1]: Finished Create System Users.
Nov 22 04:42:27 localhost systemd-journald[309]: Journal started
Nov 22 04:42:27 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/66851c39840f46c8adfc77dc6a7d91a4) is 8.0M, max 153.6M, 145.6M free.
Nov 22 04:42:27 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Nov 22 04:42:27 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Nov 22 04:42:27 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 22 04:42:27 localhost systemd[1]: Started Journal Service.
Nov 22 04:42:27 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 04:42:27 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 04:42:27 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 04:42:27 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 04:42:27 localhost systemd[1]: Finished Setup Virtual Console.
Nov 22 04:42:27 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 22 04:42:27 localhost systemd[1]: Starting dracut cmdline hook...
Nov 22 04:42:27 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Nov 22 04:42:27 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 04:42:27 localhost systemd[1]: Finished dracut cmdline hook.
Nov 22 04:42:27 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 22 04:42:27 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 22 04:42:27 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 22 04:42:27 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 22 04:42:27 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 22 04:42:27 localhost kernel: RPC: Registered udp transport module.
Nov 22 04:42:27 localhost kernel: RPC: Registered tcp transport module.
Nov 22 04:42:27 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 22 04:42:27 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 22 04:42:27 localhost rpc.statd[445]: Version 2.5.4 starting
Nov 22 04:42:27 localhost rpc.statd[445]: Initializing NSM state
Nov 22 04:42:27 localhost rpc.idmapd[450]: Setting log level to 0
Nov 22 04:42:27 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 22 04:42:28 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 04:42:28 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 04:42:28 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 04:42:28 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 22 04:42:28 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 22 04:42:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 04:42:28 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 22 04:42:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 04:42:28 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 04:42:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 04:42:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 04:42:28 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 22 04:42:28 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 04:42:28 localhost systemd[1]: Reached target Network.
Nov 22 04:42:28 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 04:42:28 localhost systemd[1]: Starting dracut initqueue hook...
Nov 22 04:42:28 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 22 04:42:28 localhost systemd[1]: Reached target System Initialization.
Nov 22 04:42:28 localhost systemd[1]: Reached target Basic System.
Nov 22 04:42:28 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 22 04:42:28 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 22 04:42:28 localhost kernel:  vda: vda1
Nov 22 04:42:28 localhost kernel: libata version 3.00 loaded.
Nov 22 04:42:28 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 22 04:42:28 localhost kernel: scsi host0: ata_piix
Nov 22 04:42:28 localhost kernel: scsi host1: ata_piix
Nov 22 04:42:28 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 22 04:42:28 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 22 04:42:28 localhost systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:42:28 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 04:42:28 localhost systemd[1]: Reached target Initrd Root Device.
Nov 22 04:42:28 localhost kernel: ata1: found unknown device (class 0)
Nov 22 04:42:28 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 22 04:42:28 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 22 04:42:28 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 22 04:42:28 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 22 04:42:28 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 22 04:42:28 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 22 04:42:28 localhost systemd[1]: Finished dracut initqueue hook.
Nov 22 04:42:28 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 04:42:28 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 22 04:42:28 localhost systemd[1]: Reached target Remote File Systems.
Nov 22 04:42:28 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 22 04:42:28 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 22 04:42:28 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 22 04:42:28 localhost systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 22 04:42:28 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 04:42:28 localhost systemd[1]: Mounting /sysroot...
Nov 22 04:42:29 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 22 04:42:29 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 22 04:42:29 localhost kernel: XFS (vda1): Ending clean mount
Nov 22 04:42:29 localhost systemd[1]: Mounted /sysroot.
Nov 22 04:42:29 localhost systemd[1]: Reached target Initrd Root File System.
Nov 22 04:42:29 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 22 04:42:29 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 22 04:42:29 localhost systemd[1]: Reached target Initrd File Systems.
Nov 22 04:42:29 localhost systemd[1]: Reached target Initrd Default Target.
Nov 22 04:42:29 localhost systemd[1]: Starting dracut mount hook...
Nov 22 04:42:29 localhost systemd[1]: Finished dracut mount hook.
Nov 22 04:42:29 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 22 04:42:29 localhost rpc.idmapd[450]: exiting on signal 15
Nov 22 04:42:29 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 22 04:42:29 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 22 04:42:29 localhost systemd[1]: Stopped target Network.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Timer Units.
Nov 22 04:42:29 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 22 04:42:29 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Basic System.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Path Units.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Remote File Systems.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Slice Units.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Socket Units.
Nov 22 04:42:29 localhost systemd[1]: Stopped target System Initialization.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Local File Systems.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Swaps.
Nov 22 04:42:29 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut mount hook.
Nov 22 04:42:29 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 22 04:42:29 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 22 04:42:29 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 22 04:42:29 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 22 04:42:29 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 22 04:42:29 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 22 04:42:29 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 22 04:42:29 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 22 04:42:29 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 22 04:42:29 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 22 04:42:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 22 04:42:29 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 22 04:42:29 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Closed udev Control Socket.
Nov 22 04:42:29 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Closed udev Kernel Socket.
Nov 22 04:42:29 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 22 04:42:29 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 22 04:42:29 localhost systemd[1]: Starting Cleanup udev Database...
Nov 22 04:42:29 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 22 04:42:29 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 22 04:42:29 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Stopped Create System Users.
Nov 22 04:42:29 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 22 04:42:29 localhost systemd[1]: Finished Cleanup udev Database.
Nov 22 04:42:29 localhost systemd[1]: Reached target Switch Root.
Nov 22 04:42:29 localhost systemd[1]: Starting Switch Root...
Nov 22 04:42:29 localhost systemd[1]: Switching root.
Nov 22 04:42:29 localhost systemd-journald[309]: Journal stopped
Nov 22 04:42:30 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Nov 22 04:42:30 localhost kernel: audit: type=1404 audit(1763786549.935:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability open_perms=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 04:42:30 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 04:42:30 localhost kernel: audit: type=1403 audit(1763786550.117:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 22 04:42:30 localhost systemd[1]: Successfully loaded SELinux policy in 189.202ms.
Nov 22 04:42:30 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.873ms.
Nov 22 04:42:30 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 04:42:30 localhost systemd[1]: Detected virtualization kvm.
Nov 22 04:42:30 localhost systemd[1]: Detected architecture x86-64.
Nov 22 04:42:30 localhost systemd-rc-local-generator[639]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 04:42:30 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Stopped Switch Root.
Nov 22 04:42:30 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 22 04:42:30 localhost systemd[1]: Created slice Slice /system/getty.
Nov 22 04:42:30 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 22 04:42:30 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 22 04:42:30 localhost systemd[1]: Created slice User and Session Slice.
Nov 22 04:42:30 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 04:42:30 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 22 04:42:30 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 22 04:42:30 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 04:42:30 localhost systemd[1]: Stopped target Switch Root.
Nov 22 04:42:30 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 22 04:42:30 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 22 04:42:30 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 22 04:42:30 localhost systemd[1]: Reached target Path Units.
Nov 22 04:42:30 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 22 04:42:30 localhost systemd[1]: Reached target Slice Units.
Nov 22 04:42:30 localhost systemd[1]: Reached target Swaps.
Nov 22 04:42:30 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 22 04:42:30 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 22 04:42:30 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 22 04:42:30 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 22 04:42:30 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 22 04:42:30 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 04:42:30 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 04:42:30 localhost systemd[1]: Mounting Huge Pages File System...
Nov 22 04:42:30 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 22 04:42:30 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 22 04:42:30 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 22 04:42:30 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 04:42:30 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 04:42:30 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 04:42:30 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 22 04:42:30 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 22 04:42:30 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 22 04:42:30 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 22 04:42:30 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 22 04:42:30 localhost systemd[1]: Stopped Journal Service.
Nov 22 04:42:30 localhost kernel: fuse: init (API version 7.37)
Nov 22 04:42:30 localhost systemd[1]: Starting Journal Service...
Nov 22 04:42:30 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 04:42:30 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 22 04:42:30 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 04:42:30 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 22 04:42:30 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 22 04:42:30 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 04:42:30 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 04:42:30 localhost systemd[1]: Mounted Huge Pages File System.
Nov 22 04:42:30 localhost systemd-journald[680]: Journal started
Nov 22 04:42:30 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 04:42:30 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 22 04:42:30 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Started Journal Service.
Nov 22 04:42:30 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 22 04:42:30 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 22 04:42:30 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 22 04:42:30 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 22 04:42:30 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 04:42:30 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 04:42:30 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 22 04:42:30 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 22 04:42:30 localhost kernel: ACPI: bus type drm_connector registered
Nov 22 04:42:30 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 22 04:42:30 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 22 04:42:30 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 22 04:42:30 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 22 04:42:30 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 22 04:42:30 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 04:42:30 localhost systemd[1]: Mounting FUSE Control File System...
Nov 22 04:42:30 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 04:42:30 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 22 04:42:30 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 22 04:42:30 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 22 04:42:30 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 22 04:42:30 localhost systemd[1]: Starting Create System Users...
Nov 22 04:42:30 localhost systemd[1]: Mounted FUSE Control File System.
Nov 22 04:42:30 localhost systemd-journald[680]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 04:42:30 localhost systemd-journald[680]: Received client request to flush runtime journal.
Nov 22 04:42:30 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 22 04:42:30 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 22 04:42:30 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 04:42:30 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 04:42:30 localhost systemd[1]: Finished Create System Users.
Nov 22 04:42:30 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 04:42:30 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 04:42:30 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 22 04:42:30 localhost systemd[1]: Reached target Local File Systems.
Nov 22 04:42:30 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 22 04:42:31 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 22 04:42:31 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 22 04:42:31 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 22 04:42:31 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 22 04:42:31 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 22 04:42:31 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 04:42:31 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Nov 22 04:42:31 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 22 04:42:31 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 04:42:31 localhost systemd[1]: Starting Security Auditing Service...
Nov 22 04:42:31 localhost systemd[1]: Starting RPC Bind...
Nov 22 04:42:31 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 22 04:42:31 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 22 04:42:31 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 22 04:42:31 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 22 04:42:31 localhost systemd[1]: Started RPC Bind.
Nov 22 04:42:31 localhost augenrules[709]: /sbin/augenrules: No change
Nov 22 04:42:31 localhost augenrules[724]: No rules
Nov 22 04:42:31 localhost augenrules[724]: enabled 1
Nov 22 04:42:31 localhost augenrules[724]: failure 1
Nov 22 04:42:31 localhost augenrules[724]: pid 704
Nov 22 04:42:31 localhost augenrules[724]: rate_limit 0
Nov 22 04:42:31 localhost augenrules[724]: backlog_limit 8192
Nov 22 04:42:31 localhost augenrules[724]: lost 0
Nov 22 04:42:31 localhost augenrules[724]: backlog 1
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 04:42:31 localhost augenrules[724]: enabled 1
Nov 22 04:42:31 localhost augenrules[724]: failure 1
Nov 22 04:42:31 localhost augenrules[724]: pid 704
Nov 22 04:42:31 localhost augenrules[724]: rate_limit 0
Nov 22 04:42:31 localhost augenrules[724]: backlog_limit 8192
Nov 22 04:42:31 localhost augenrules[724]: lost 0
Nov 22 04:42:31 localhost augenrules[724]: backlog 0
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 04:42:31 localhost augenrules[724]: enabled 1
Nov 22 04:42:31 localhost augenrules[724]: failure 1
Nov 22 04:42:31 localhost augenrules[724]: pid 704
Nov 22 04:42:31 localhost augenrules[724]: rate_limit 0
Nov 22 04:42:31 localhost augenrules[724]: backlog_limit 8192
Nov 22 04:42:31 localhost augenrules[724]: lost 0
Nov 22 04:42:31 localhost augenrules[724]: backlog 0
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 04:42:31 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 04:42:31 localhost systemd[1]: Started Security Auditing Service.
Nov 22 04:42:31 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 22 04:42:31 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 22 04:42:31 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 22 04:42:31 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 04:42:31 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 22 04:42:31 localhost systemd[1]: Starting Update is Completed...
Nov 22 04:42:31 localhost systemd[1]: Finished Update is Completed.
Nov 22 04:42:31 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 04:42:31 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 04:42:31 localhost systemd[1]: Reached target System Initialization.
Nov 22 04:42:31 localhost systemd[1]: Started dnf makecache --timer.
Nov 22 04:42:31 localhost systemd[1]: Started Daily rotation of log files.
Nov 22 04:42:31 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 22 04:42:31 localhost systemd[1]: Reached target Timer Units.
Nov 22 04:42:31 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 04:42:31 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 22 04:42:31 localhost systemd[1]: Reached target Socket Units.
Nov 22 04:42:31 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 22 04:42:31 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 04:42:31 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 22 04:42:31 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 04:42:31 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 04:42:31 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 04:42:31 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 22 04:42:31 localhost systemd[1]: Reached target Basic System.
Nov 22 04:42:31 localhost dbus-broker-lau[757]: Ready
Nov 22 04:42:31 localhost systemd[1]: Starting NTP client/server...
Nov 22 04:42:31 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 22 04:42:31 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 22 04:42:31 localhost systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:42:31 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 22 04:42:31 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 22 04:42:31 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 22 04:42:31 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 22 04:42:31 localhost chronyd[781]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 04:42:31 localhost chronyd[781]: Loaded 0 symmetric keys
Nov 22 04:42:31 localhost chronyd[781]: Using right/UTC timezone to obtain leap second data
Nov 22 04:42:31 localhost chronyd[781]: Loaded seccomp filter (level 2)
Nov 22 04:42:31 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 22 04:42:31 localhost systemd[1]: Started irqbalance daemon.
Nov 22 04:42:31 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 22 04:42:31 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 04:42:31 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 04:42:31 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 04:42:31 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 22 04:42:31 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 22 04:42:31 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 22 04:42:31 localhost systemd[1]: Starting User Login Management...
Nov 22 04:42:31 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 22 04:42:31 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 22 04:42:31 localhost kernel: kvm_amd: TSC scaling supported
Nov 22 04:42:31 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 22 04:42:31 localhost kernel: kvm_amd: Nested Paging enabled
Nov 22 04:42:31 localhost kernel: kvm_amd: LBR virtualization supported
Nov 22 04:42:31 localhost systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 04:42:31 localhost systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 04:42:31 localhost kernel: Console: switching to colour dummy device 80x25
Nov 22 04:42:31 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 22 04:42:31 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 22 04:42:31 localhost kernel: [drm] features: -context_init
Nov 22 04:42:31 localhost kernel: [drm] number of scanouts: 1
Nov 22 04:42:31 localhost kernel: [drm] number of cap sets: 0
Nov 22 04:42:31 localhost systemd-logind[798]: New seat seat0.
Nov 22 04:42:31 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 22 04:42:31 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 22 04:42:31 localhost systemd[1]: Started User Login Management.
Nov 22 04:42:31 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 22 04:42:31 localhost systemd[1]: Started NTP client/server.
Nov 22 04:42:31 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 22 04:42:32 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 22 04:42:32 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 22 04:42:32 localhost iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Nov 22 04:42:32 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 22 04:42:32 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 22 Nov 2025 04:42:32 +0000. Up 7.20 seconds.
Nov 22 04:42:32 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 22 04:42:32 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 22 04:42:32 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpum_cq3dw.mount: Deactivated successfully.
Nov 22 04:42:32 localhost systemd[1]: Starting Hostname Service...
Nov 22 04:42:32 localhost systemd[1]: Started Hostname Service.
Nov 22 04:42:32 np0005531754.novalocal systemd-hostnamed[854]: Hostname set to <np0005531754.novalocal> (static)
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Reached target Preparation for Network.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Starting Network Manager...
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2305] NetworkManager (version 1.54.1-1.el9) is starting... (boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2313] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2496] manager[0x56317bbd3080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2573] hostname: hostname: using hostnamed
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2573] hostname: static hostname changed from (none) to "np0005531754.novalocal"
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2581] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2753] manager[0x56317bbd3080]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2754] manager[0x56317bbd3080]: rfkill: WWAN hardware radio set enabled
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2934] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2935] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2936] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2937] manager: Networking is enabled by state file
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2939] settings: Loaded settings plugin: keyfile (internal)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.2990] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3048] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3110] dhcp: init: Using DHCP client 'internal'
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3115] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3137] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3160] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3174] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3192] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3197] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Started Network Manager.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3243] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3254] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3258] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3261] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3264] device (eth0): carrier: link connected
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3271] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3281] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Reached target Network.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3291] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3297] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3298] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3301] manager: NetworkManager state is now CONNECTING
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3303] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3315] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3319] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3377] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3384] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3406] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3600] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3603] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3605] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3614] device (lo): Activation: successful, device activated.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3621] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3627] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3630] device (eth0): Activation: successful, device activated.
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3639] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 04:42:33 np0005531754.novalocal NetworkManager[858]: <info>  [1763786553.3643] manager: startup complete
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Reached target NFS client services.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: Reached target Remote File Systems.
Nov 22 04:42:33 np0005531754.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 22 Nov 2025 04:42:33 +0000. Up 8.36 seconds.
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.23         | 255.255.255.0 | global | fa:16:3e:56:fc:55 |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe56:fc55/64 |       .       |  link  | fa:16:3e:56:fc:55 |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 22 04:42:33 np0005531754.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Nov 22 04:42:34 np0005531754.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Generating public/private rsa key pair.
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key fingerprint is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: SHA256:Pd18xs2QwMu9vWq8bkW/9inXkdMB7tAJwH0P0TKKe1I root@np0005531754.novalocal
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key's randomart image is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +---[RSA 3072]----+
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |        ..o...o  |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |         . o.B o |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |          ..*oX  |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |         o Eo*.B.|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |        S = + +o@|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |         o o ..B+|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |          o . . *|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |             = =o|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |            ++*.o|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key fingerprint is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: SHA256:ie4Hw3m1Ucaaoxq1WmD3VMkZ+EcQeoE4kuoXHJx46Nw root@np0005531754.novalocal
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key's randomart image is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +---[ECDSA 256]---+
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |     + o . ==*   |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |    o B o o.O..  |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |   o = o ..B..   |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |    + E.o.B.. .  |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |   . o.*S* + .   |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |    ..B = o      |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |     ..O         |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |     .o .        |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |      ..         |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key fingerprint is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: SHA256:uRWf9TMjQNwacifyDPzhEUjlvB4XCAfporYhFomYSmU root@np0005531754.novalocal
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: The key's randomart image is:
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +--[ED25519 256]--+
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |        o+*=o    |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |   E     B*B.o   |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |..+ .   . @=B..  |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |oo o   . o Ooo.. |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |o   . . S .ooo +.|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |.  o +   o. o . +|
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |  . o o .  .     |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |     .           |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: |                 |
Nov 22 04:42:35 np0005531754.novalocal cloud-init[921]: +----[SHA256]-----+
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Reached target Network is Online.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting System Logging Service...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 22 04:42:35 np0005531754.novalocal sm-notify[1004]: Version 2.5.4 starting
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Permit User Sessions...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Finished Permit User Sessions.
Nov 22 04:42:35 np0005531754.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Nov 22 04:42:35 np0005531754.novalocal sshd[1006]: Server listening on :: port 22.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started Command Scheduler.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started Getty on tty1.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Reached target Login Prompts.
Nov 22 04:42:35 np0005531754.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Nov 22 04:42:35 np0005531754.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 22 04:42:35 np0005531754.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 55% if used.)
Nov 22 04:42:35 np0005531754.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 22 04:42:35 np0005531754.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 22 04:42:35 np0005531754.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Started System Logging Service.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Reached target Multi-User System.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 22 04:42:35 np0005531754.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:42:35 np0005531754.novalocal kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Nov 22 04:42:35 np0005531754.novalocal kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 22 04:42:35 np0005531754.novalocal cloud-init[1089]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 22 Nov 2025 04:42:35 +0000. Up 10.36 seconds.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 22 04:42:35 np0005531754.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1232]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 22 Nov 2025 04:42:36 +0000. Up 10.78 seconds.
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1252]: Unable to negotiate with 38.102.83.114 port 37686: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1260]: Connection closed by 38.102.83.114 port 37692 [preauth]
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1271]: #############################################################
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1274]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1273]: Unable to negotiate with 38.102.83.114 port 37700: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1280]: 256 SHA256:ie4Hw3m1Ucaaoxq1WmD3VMkZ+EcQeoE4kuoXHJx46Nw root@np0005531754.novalocal (ECDSA)
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1279]: Unable to negotiate with 38.102.83.114 port 37714: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1286]: 256 SHA256:uRWf9TMjQNwacifyDPzhEUjlvB4XCAfporYhFomYSmU root@np0005531754.novalocal (ED25519)
Nov 22 04:42:36 np0005531754.novalocal dracut[1284]: dracut-057-102.git20250818.el9
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1240]: Connection closed by 38.102.83.114 port 37670 [preauth]
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1290]: 3072 SHA256:Pd18xs2QwMu9vWq8bkW/9inXkdMB7tAJwH0P0TKKe1I root@np0005531754.novalocal (RSA)
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1292]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1294]: #############################################################
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1302]: Connection reset by 38.102.83.114 port 37736 [preauth]
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1310]: Unable to negotiate with 38.102.83.114 port 37750: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 22 04:42:36 np0005531754.novalocal cloud-init[1232]: Cloud-init v. 24.4-7.el9 finished at Sat, 22 Nov 2025 04:42:36 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.03 seconds
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1314]: Unable to negotiate with 38.102.83.114 port 37760: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 22 04:42:36 np0005531754.novalocal sshd-session[1285]: Connection closed by 38.102.83.114 port 37722 [preauth]
Nov 22 04:42:36 np0005531754.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 22 04:42:36 np0005531754.novalocal systemd[1]: Reached target Cloud-init target.
Nov 22 04:42:36 np0005531754.novalocal dracut[1291]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 22 04:42:37 np0005531754.novalocal dracut[1291]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: memstrack is not available
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 04:42:38 np0005531754.novalocal dracut[1291]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 04:42:38 np0005531754.novalocal chronyd[781]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 22 04:42:38 np0005531754.novalocal chronyd[781]: System clock wrong by 1.240351 seconds
Nov 22 04:42:39 np0005531754.novalocal chronyd[781]: System clock was stepped by 1.240351 seconds
Nov 22 04:42:39 np0005531754.novalocal chronyd[781]: System clock TAI offset set to 37 seconds
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: memstrack is not available
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 04:42:39 np0005531754.novalocal dracut[1291]: *** Including module: systemd ***
Nov 22 04:42:40 np0005531754.novalocal dracut[1291]: *** Including module: fips ***
Nov 22 04:42:40 np0005531754.novalocal dracut[1291]: *** Including module: systemd-initrd ***
Nov 22 04:42:40 np0005531754.novalocal dracut[1291]: *** Including module: i18n ***
Nov 22 04:42:40 np0005531754.novalocal dracut[1291]: *** Including module: drm ***
Nov 22 04:42:41 np0005531754.novalocal dracut[1291]: *** Including module: prefixdevname ***
Nov 22 04:42:41 np0005531754.novalocal dracut[1291]: *** Including module: kernel-modules ***
Nov 22 04:42:41 np0005531754.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: kernel-modules-extra ***
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: qemu ***
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: fstab-sys ***
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: rootfs-block ***
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: terminfo ***
Nov 22 04:42:42 np0005531754.novalocal dracut[1291]: *** Including module: udev-rules ***
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: Skipping udev rule: 91-permissions.rules
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: virtiofs ***
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: dracut-systemd ***
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 25 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 31 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 28 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 32 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 30 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 22 04:42:43 np0005531754.novalocal irqbalance[791]: IRQ 29 affinity is now unmanaged
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: usrmount ***
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: base ***
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: fs-lib ***
Nov 22 04:42:43 np0005531754.novalocal dracut[1291]: *** Including module: kdumpbase ***
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:   microcode_ctl module: mangling fw_dir
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 22 04:42:44 np0005531754.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]: *** Including module: openssl ***
Nov 22 04:42:44 np0005531754.novalocal dracut[1291]: *** Including module: shutdown ***
Nov 22 04:42:45 np0005531754.novalocal dracut[1291]: *** Including module: squash ***
Nov 22 04:42:45 np0005531754.novalocal dracut[1291]: *** Including modules done ***
Nov 22 04:42:45 np0005531754.novalocal dracut[1291]: *** Installing kernel module dependencies ***
Nov 22 04:42:46 np0005531754.novalocal dracut[1291]: *** Installing kernel module dependencies done ***
Nov 22 04:42:46 np0005531754.novalocal dracut[1291]: *** Resolving executable dependencies ***
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: *** Resolving executable dependencies done ***
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: *** Generating early-microcode cpio image ***
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: *** Store current command line parameters ***
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: Stored kernel commandline:
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: No dracut internal kernel commandline stored in the initramfs
Nov 22 04:42:48 np0005531754.novalocal dracut[1291]: *** Install squash loader ***
Nov 22 04:42:49 np0005531754.novalocal dracut[1291]: *** Squashing the files inside the initramfs ***
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: *** Squashing the files inside the initramfs done ***
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: *** Hardlinking files ***
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Mode:           real
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Files:          50
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Linked:         0 files
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Compared:       0 xattrs
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Compared:       0 files
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Saved:          0 B
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: Duration:       0.000608 seconds
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: *** Hardlinking files done ***
Nov 22 04:42:50 np0005531754.novalocal dracut[1291]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 22 04:42:51 np0005531754.novalocal kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Nov 22 04:42:51 np0005531754.novalocal kdumpctl[1014]: kdump: Starting kdump: [OK]
Nov 22 04:42:51 np0005531754.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 22 04:42:51 np0005531754.novalocal systemd[1]: Startup finished in 1.591s (kernel) + 2.997s (initrd) + 20.766s (userspace) = 25.355s.
Nov 22 04:43:04 np0005531754.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 04:43:06 np0005531754.novalocal sshd-session[4298]: Accepted publickey for zuul from 38.102.83.114 port 40784 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 22 04:43:06 np0005531754.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 22 04:43:06 np0005531754.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 22 04:43:06 np0005531754.novalocal systemd-logind[798]: New session 1 of user zuul.
Nov 22 04:43:06 np0005531754.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 22 04:43:06 np0005531754.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Queued start job for default target Main User Target.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Created slice User Application Slice.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Reached target Paths.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Reached target Timers.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Starting D-Bus User Message Bus Socket...
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Starting Create User's Volatile Files and Directories...
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Finished Create User's Volatile Files and Directories.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Listening on D-Bus User Message Bus Socket.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Reached target Sockets.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Reached target Basic System.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Reached target Main User Target.
Nov 22 04:43:06 np0005531754.novalocal systemd[4302]: Startup finished in 229ms.
Nov 22 04:43:06 np0005531754.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 22 04:43:07 np0005531754.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 22 04:43:07 np0005531754.novalocal sshd-session[4298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:43:07 np0005531754.novalocal python3[4384]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 04:43:10 np0005531754.novalocal python3[4412]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 04:43:16 np0005531754.novalocal python3[4470]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 04:43:17 np0005531754.novalocal python3[4510]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 22 04:43:19 np0005531754.novalocal python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBpUWqzN7rKnv/+ddt39fBVp0U+zbBXsmG93ls34HyWhLWKB/ajnai0sKL5TB5FWWUSInTMxoNNLgm1UlPKTui3jEvx7cA8oUrOI+sUharb/CsGk33xi4JXPppoauT2w0NMmnoIOlYiN9tGg7anp1XQDD7pu+J6Xr1NqJUceEcm/yz7o+AG4RoW+jQozuApioBPhMkEnO/ss7iAGQuSWghuxIURVUnTmZWxyYDyQkHEbnNr1RddXUKURwQnTRkwtzS0+b5DzwH1+YfNxomFjO+6ThSY/fEU+EvHoUdwGCqHGPw1TPC9Oq/n4iRkRi2YNW7beU9LZatBiGBXwXYkuL+QgGxLCoJuQ/PAk+d72wXT70X0iT9VvAmpsoqE9/Ld3x8ec4EnIsok9d8l3MnYnUn2OdXXVKtkyr1xYkULUS4sewXcDd5Vwij/jjVeWt5WN1bJTaxU9RDgEmuG1DJaGRY1el2Z9kcqtGsGjUmzgsVKAr/x4ISz6yF91AKyuhOKyk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:19 np0005531754.novalocal python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:20 np0005531754.novalocal python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:20 np0005531754.novalocal python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786599.8293571-207-193904859484822/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=601f438a9b1842e89a60d702fa83ffa8_id_rsa follow=False checksum=33ab2b9faf404664c2a582d5d25b5bbcf9a6dc98 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:21 np0005531754.novalocal python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:21 np0005531754.novalocal python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786600.9384837-240-256143430379336/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=601f438a9b1842e89a60d702fa83ffa8_id_rsa.pub follow=False checksum=9c9dcf22f193e28444145b04c9ea0edc70a98c3b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:23 np0005531754.novalocal python3[4972]: ansible-ping Invoked with data=pong
Nov 22 04:43:24 np0005531754.novalocal python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 04:43:25 np0005531754.novalocal python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 22 04:43:26 np0005531754.novalocal python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:27 np0005531754.novalocal python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:27 np0005531754.novalocal python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:27 np0005531754.novalocal python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:28 np0005531754.novalocal python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:28 np0005531754.novalocal python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:30 np0005531754.novalocal sudo[5230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmimxrslhpsmtpjoqulrbjxjwqfunqxi ; /usr/bin/python3'
Nov 22 04:43:30 np0005531754.novalocal sudo[5230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:30 np0005531754.novalocal python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:30 np0005531754.novalocal sudo[5230]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:30 np0005531754.novalocal sudo[5308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwzzaqvicrwsnlchzhiwmgegtlwojza ; /usr/bin/python3'
Nov 22 04:43:30 np0005531754.novalocal sudo[5308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:31 np0005531754.novalocal python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:31 np0005531754.novalocal sudo[5308]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:31 np0005531754.novalocal sudo[5381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfjxsggvewnfezpuamydtzpgixdgxish ; /usr/bin/python3'
Nov 22 04:43:31 np0005531754.novalocal sudo[5381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:31 np0005531754.novalocal python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786610.544543-21-26045239449004/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:31 np0005531754.novalocal sudo[5381]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:32 np0005531754.novalocal python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:32 np0005531754.novalocal python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:32 np0005531754.novalocal python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:33 np0005531754.novalocal python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:33 np0005531754.novalocal python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:33 np0005531754.novalocal python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:34 np0005531754.novalocal python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:34 np0005531754.novalocal python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:34 np0005531754.novalocal python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:35 np0005531754.novalocal python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:35 np0005531754.novalocal python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:35 np0005531754.novalocal python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:36 np0005531754.novalocal python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:36 np0005531754.novalocal python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:36 np0005531754.novalocal python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:36 np0005531754.novalocal python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:37 np0005531754.novalocal python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:37 np0005531754.novalocal python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:37 np0005531754.novalocal python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:38 np0005531754.novalocal python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:38 np0005531754.novalocal python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:38 np0005531754.novalocal python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:38 np0005531754.novalocal python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:39 np0005531754.novalocal python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:39 np0005531754.novalocal python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:39 np0005531754.novalocal python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:43:42 np0005531754.novalocal sudo[6055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lszvvsvzwqcvmicuvgouelgwsfhdzkkf ; /usr/bin/python3'
Nov 22 04:43:42 np0005531754.novalocal sudo[6055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:42 np0005531754.novalocal python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 04:43:42 np0005531754.novalocal systemd[1]: Starting Time & Date Service...
Nov 22 04:43:42 np0005531754.novalocal systemd[1]: Started Time & Date Service.
Nov 22 04:43:42 np0005531754.novalocal systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Nov 22 04:43:42 np0005531754.novalocal sudo[6055]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:43 np0005531754.novalocal sudo[6086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xspbqfcvsqsjnontodkswtwgejzklzto ; /usr/bin/python3'
Nov 22 04:43:43 np0005531754.novalocal sudo[6086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:43 np0005531754.novalocal python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:43 np0005531754.novalocal sudo[6086]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:44 np0005531754.novalocal python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:44 np0005531754.novalocal python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763786624.0445921-153-149998034230791/source _original_basename=tmp73cujooz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:45 np0005531754.novalocal python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:45 np0005531754.novalocal python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763786625.0436249-183-78888262236887/source _original_basename=tmp5phomsql follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:46 np0005531754.novalocal sudo[6506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhornmhgegxbwcfajthsdvwatebgzarf ; /usr/bin/python3'
Nov 22 04:43:46 np0005531754.novalocal sudo[6506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:46 np0005531754.novalocal python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:46 np0005531754.novalocal sudo[6506]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:46 np0005531754.novalocal sudo[6579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzcgbmsttvpgxlsbmzyvcqngjrcwfnlr ; /usr/bin/python3'
Nov 22 04:43:46 np0005531754.novalocal sudo[6579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:47 np0005531754.novalocal python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763786626.3207955-231-119099449908049/source _original_basename=tmp_sojauz2 follow=False checksum=6bf095e75b543d66829428b8a294812d38465cfe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:47 np0005531754.novalocal sudo[6579]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:47 np0005531754.novalocal python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:43:48 np0005531754.novalocal python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:43:48 np0005531754.novalocal sudo[6733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bodzgswjzuuisgwuzmtbkfdhqesdxtgw ; /usr/bin/python3'
Nov 22 04:43:48 np0005531754.novalocal sudo[6733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:48 np0005531754.novalocal python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:43:48 np0005531754.novalocal sudo[6733]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:48 np0005531754.novalocal sudo[6806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savonswamxgnoclzbpgtpzfuzfseikyh ; /usr/bin/python3'
Nov 22 04:43:48 np0005531754.novalocal sudo[6806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:49 np0005531754.novalocal python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786628.274287-273-232778249592164/source _original_basename=tmpl3bvw0ip follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:43:49 np0005531754.novalocal sudo[6806]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:49 np0005531754.novalocal sudo[6857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouhaaauvycxhmfjeapltzkwaukrvawky ; /usr/bin/python3'
Nov 22 04:43:49 np0005531754.novalocal sudo[6857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:43:49 np0005531754.novalocal python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-18aa-7dab-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:43:49 np0005531754.novalocal sudo[6857]: pam_unix(sudo:session): session closed for user root
Nov 22 04:43:50 np0005531754.novalocal python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-18aa-7dab-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 22 04:43:51 np0005531754.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:44:09 np0005531754.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrsiyswdojeggpqpxfosonsdoirowbpn ; /usr/bin/python3'
Nov 22 04:44:09 np0005531754.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:44:09 np0005531754.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:44:09 np0005531754.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Nov 22 04:44:12 np0005531754.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 22 04:44:46 np0005531754.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 22 04:44:46 np0005531754.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7357] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 04:44:46 np0005531754.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7609] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7641] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7647] device (eth1): carrier: link connected
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7649] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7656] policy: auto-activating connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7661] device (eth1): Activation: starting connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7662] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7666] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7671] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 04:44:46 np0005531754.novalocal NetworkManager[858]: <info>  [1763786686.7676] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:44:48 np0005531754.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-b797-5a6b-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:44:57 np0005531754.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhxtpgljgqgpfugpuxjhjczqobqjgtlh ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 04:44:57 np0005531754.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:44:58 np0005531754.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:44:58 np0005531754.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Nov 22 04:44:58 np0005531754.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iegtzcfpnblcsqjxwmqtijtizutmfwcl ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 04:44:58 np0005531754.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:44:58 np0005531754.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763786697.712137-102-67995338701127/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=9581fc4aa17865d42e880f9feece8ad2d131da8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:44:58 np0005531754.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Nov 22 04:44:59 np0005531754.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmeanqnkfuzbnmhcpjfbdwmxyvpczzrj ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 04:44:59 np0005531754.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:44:59 np0005531754.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Stopping Network Manager...
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4754] caught SIGTERM, shutting down normally.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4770] dhcp4 (eth0): canceled DHCP transaction
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4771] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4771] dhcp4 (eth0): state changed no lease
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4779] manager: NetworkManager state is now CONNECTING
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4985] dhcp4 (eth1): canceled DHCP transaction
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.4986] dhcp4 (eth1): state changed no lease
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[858]: <info>  [1763786699.5270] exiting (success)
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Stopped Network Manager.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: NetworkManager.service: Consumed 1.375s CPU time, 10.0M memory peak.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Starting Network Manager...
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.6213] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.6214] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.6282] manager[0x5643cc391070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Starting Hostname Service...
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Started Hostname Service.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7443] hostname: hostname: using hostnamed
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7446] hostname: static hostname changed from (none) to "np0005531754.novalocal"
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7454] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7460] manager[0x5643cc391070]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7461] manager[0x5643cc391070]: rfkill: WWAN hardware radio set enabled
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7508] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7508] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7510] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7511] manager: Networking is enabled by state file
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7516] settings: Loaded settings plugin: keyfile (internal)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7524] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7571] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7589] dhcp: init: Using DHCP client 'internal'
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7594] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7604] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7614] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7629] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7641] device (eth0): carrier: link connected
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7649] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7658] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7659] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7673] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7686] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7698] device (eth1): carrier: link connected
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7705] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7715] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18) (indicated)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7716] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7726] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7737] device (eth1): Activation: starting connection 'Wired connection 1' (b63a3bd3-2d39-3e26-9215-4f6c298d6a18)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7747] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Started Network Manager.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7755] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7759] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7762] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7766] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7771] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7775] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7779] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7782] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7795] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7799] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7811] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7816] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7840] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7848] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7857] device (lo): Activation: successful, device activated.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7869] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.7881] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 04:44:59 np0005531754.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 04:44:59 np0005531754.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8822] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8858] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8862] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8870] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8879] device (eth0): Activation: successful, device activated.
Nov 22 04:44:59 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786699.8888] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 04:45:00 np0005531754.novalocal python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-b797-5a6b-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:45:09 np0005531754.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 04:45:29 np0005531754.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.5669] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 04:45:44 np0005531754.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 04:45:44 np0005531754.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.5985] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.5988] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.5999] device (eth1): Activation: successful, device activated.
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6009] manager: startup complete
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6011] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <warn>  [1763786744.6018] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6031] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6135] dhcp4 (eth1): canceled DHCP transaction
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6136] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6136] dhcp4 (eth1): state changed no lease
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6156] policy: auto-activating connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6163] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6165] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6173] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6183] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6197] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6248] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6251] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 04:45:44 np0005531754.novalocal NetworkManager[7192]: <info>  [1763786744.6261] device (eth1): Activation: successful, device activated.
Nov 22 04:45:54 np0005531754.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 04:45:57 np0005531754.novalocal systemd[4302]: Starting Mark boot as successful...
Nov 22 04:45:57 np0005531754.novalocal systemd[4302]: Finished Mark boot as successful.
Nov 22 04:45:58 np0005531754.novalocal sudo[7363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltbpfkvxijhwleiohiyuuieqihyubhlu ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 04:45:58 np0005531754.novalocal sudo[7363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:45:58 np0005531754.novalocal python3[7365]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:45:58 np0005531754.novalocal sudo[7363]: pam_unix(sudo:session): session closed for user root
Nov 22 04:45:58 np0005531754.novalocal sudo[7436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alhqfpsgszpiertptdycemqlyiiqsjce ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 04:45:58 np0005531754.novalocal sudo[7436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:45:59 np0005531754.novalocal python3[7438]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763786758.2375803-267-167195522605787/source _original_basename=tmpcngdf3al follow=False checksum=1c0a7dfd166548c9d6844776e0079fa19f47fafb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:45:59 np0005531754.novalocal sudo[7436]: pam_unix(sudo:session): session closed for user root
Nov 22 04:46:45 np0005531754.novalocal sshd-session[7463]: Connection closed by authenticating user root 103.175.73.3 port 46332 [preauth]
Nov 22 04:46:46 np0005531754.novalocal sshd-session[7465]: Connection closed by authenticating user root 103.175.73.3 port 46974 [preauth]
Nov 22 04:46:48 np0005531754.novalocal sshd-session[7467]: Connection closed by authenticating user root 103.175.73.3 port 47762 [preauth]
Nov 22 04:46:49 np0005531754.novalocal sshd-session[7469]: Connection closed by authenticating user root 103.175.73.3 port 48410 [preauth]
Nov 22 04:46:51 np0005531754.novalocal sshd-session[7471]: Connection closed by authenticating user root 103.175.73.3 port 49048 [preauth]
Nov 22 04:46:52 np0005531754.novalocal sshd-session[7473]: Connection closed by authenticating user root 103.175.73.3 port 49596 [preauth]
Nov 22 04:46:53 np0005531754.novalocal sshd-session[7475]: Connection closed by authenticating user root 103.175.73.3 port 50282 [preauth]
Nov 22 04:46:55 np0005531754.novalocal sshd-session[7477]: Connection closed by authenticating user root 103.175.73.3 port 51146 [preauth]
Nov 22 04:46:56 np0005531754.novalocal sshd-session[7479]: Connection closed by authenticating user root 103.175.73.3 port 51702 [preauth]
Nov 22 04:46:57 np0005531754.novalocal sshd-session[7481]: Connection closed by authenticating user root 103.175.73.3 port 52304 [preauth]
Nov 22 04:46:59 np0005531754.novalocal sshd-session[4311]: Received disconnect from 38.102.83.114 port 40784:11: disconnected by user
Nov 22 04:46:59 np0005531754.novalocal sshd-session[4311]: Disconnected from user zuul 38.102.83.114 port 40784
Nov 22 04:46:59 np0005531754.novalocal sshd-session[4298]: pam_unix(sshd:session): session closed for user zuul
Nov 22 04:46:59 np0005531754.novalocal systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Nov 22 04:46:59 np0005531754.novalocal sshd-session[7483]: Connection closed by authenticating user root 103.175.73.3 port 53016 [preauth]
Nov 22 04:47:00 np0005531754.novalocal sshd-session[7485]: Connection closed by authenticating user root 103.175.73.3 port 53770 [preauth]
Nov 22 04:47:02 np0005531754.novalocal sshd-session[7487]: Connection closed by authenticating user root 103.175.73.3 port 54542 [preauth]
Nov 22 04:47:03 np0005531754.novalocal sshd-session[7489]: Connection closed by authenticating user root 103.175.73.3 port 55088 [preauth]
Nov 22 04:47:04 np0005531754.novalocal sshd-session[7491]: Connection closed by authenticating user root 103.175.73.3 port 55774 [preauth]
Nov 22 04:47:06 np0005531754.novalocal sshd-session[7493]: Connection closed by authenticating user root 103.175.73.3 port 56508 [preauth]
Nov 22 04:47:07 np0005531754.novalocal sshd-session[7495]: Connection closed by authenticating user root 103.175.73.3 port 57326 [preauth]
Nov 22 04:47:08 np0005531754.novalocal sshd-session[7497]: Connection closed by authenticating user root 103.175.73.3 port 57826 [preauth]
Nov 22 04:47:10 np0005531754.novalocal sshd-session[7499]: Connection closed by authenticating user root 103.175.73.3 port 58398 [preauth]
Nov 22 04:47:11 np0005531754.novalocal sshd-session[7501]: Connection closed by authenticating user root 103.175.73.3 port 59120 [preauth]
Nov 22 04:47:13 np0005531754.novalocal sshd-session[7503]: Connection closed by authenticating user root 103.175.73.3 port 59860 [preauth]
Nov 22 04:47:14 np0005531754.novalocal sshd-session[7505]: Connection closed by authenticating user root 103.175.73.3 port 60652 [preauth]
Nov 22 04:47:15 np0005531754.novalocal sshd-session[7507]: Connection closed by authenticating user root 103.175.73.3 port 32940 [preauth]
Nov 22 04:47:17 np0005531754.novalocal sshd-session[7509]: Connection closed by authenticating user root 103.175.73.3 port 33616 [preauth]
Nov 22 04:47:18 np0005531754.novalocal sshd-session[7511]: Connection closed by authenticating user root 103.175.73.3 port 34458 [preauth]
Nov 22 04:47:20 np0005531754.novalocal sshd-session[7513]: Connection closed by authenticating user root 103.175.73.3 port 35216 [preauth]
Nov 22 04:47:21 np0005531754.novalocal sshd-session[7515]: Connection closed by authenticating user root 103.175.73.3 port 35744 [preauth]
Nov 22 04:47:22 np0005531754.novalocal sshd-session[7517]: Connection closed by authenticating user root 103.175.73.3 port 36302 [preauth]
Nov 22 04:47:24 np0005531754.novalocal sshd-session[7519]: Connection closed by authenticating user root 103.175.73.3 port 36998 [preauth]
Nov 22 04:47:25 np0005531754.novalocal sshd-session[7521]: Connection closed by authenticating user root 103.175.73.3 port 37796 [preauth]
Nov 22 04:47:26 np0005531754.novalocal sshd-session[7523]: Connection closed by authenticating user root 103.175.73.3 port 38348 [preauth]
Nov 22 04:47:28 np0005531754.novalocal sshd-session[7525]: Connection closed by authenticating user root 103.175.73.3 port 38988 [preauth]
Nov 22 04:47:29 np0005531754.novalocal sshd-session[7527]: Connection closed by authenticating user root 103.175.73.3 port 39812 [preauth]
Nov 22 04:47:31 np0005531754.novalocal sshd-session[7529]: Connection closed by authenticating user root 103.175.73.3 port 40422 [preauth]
Nov 22 04:47:32 np0005531754.novalocal sshd-session[7531]: Connection closed by authenticating user root 103.175.73.3 port 41090 [preauth]
Nov 22 04:47:33 np0005531754.novalocal sshd-session[7533]: Connection closed by authenticating user root 103.175.73.3 port 41824 [preauth]
Nov 22 04:47:35 np0005531754.novalocal sshd-session[7535]: Connection closed by authenticating user root 103.175.73.3 port 42376 [preauth]
Nov 22 04:47:36 np0005531754.novalocal sshd-session[7537]: Connection closed by authenticating user root 103.175.73.3 port 43016 [preauth]
Nov 22 04:47:37 np0005531754.novalocal sshd-session[7539]: Connection closed by authenticating user root 103.175.73.3 port 43894 [preauth]
Nov 22 04:47:39 np0005531754.novalocal sshd-session[7541]: Connection closed by authenticating user root 103.175.73.3 port 44446 [preauth]
Nov 22 04:47:40 np0005531754.novalocal sshd-session[7543]: Connection closed by authenticating user root 103.175.73.3 port 45318 [preauth]
Nov 22 04:47:42 np0005531754.novalocal sshd-session[7545]: Connection closed by authenticating user root 103.175.73.3 port 45902 [preauth]
Nov 22 04:47:43 np0005531754.novalocal sshd-session[7547]: Connection closed by authenticating user root 103.175.73.3 port 46492 [preauth]
Nov 22 04:47:44 np0005531754.novalocal sshd-session[7549]: Connection closed by authenticating user root 103.175.73.3 port 47270 [preauth]
Nov 22 04:47:46 np0005531754.novalocal sshd-session[7551]: Connection closed by authenticating user root 103.175.73.3 port 47876 [preauth]
Nov 22 04:47:47 np0005531754.novalocal sshd-session[7553]: Connection closed by authenticating user root 103.175.73.3 port 48428 [preauth]
Nov 22 04:47:48 np0005531754.novalocal sshd-session[7555]: Connection closed by authenticating user root 103.175.73.3 port 49230 [preauth]
Nov 22 04:47:50 np0005531754.novalocal sshd-session[7557]: Connection closed by authenticating user root 103.175.73.3 port 50022 [preauth]
Nov 22 04:47:51 np0005531754.novalocal sshd-session[7559]: Connection closed by authenticating user root 103.175.73.3 port 50742 [preauth]
Nov 22 04:47:53 np0005531754.novalocal sshd-session[7561]: Connection closed by authenticating user root 103.175.73.3 port 51458 [preauth]
Nov 22 04:47:54 np0005531754.novalocal sshd-session[7563]: Connection closed by authenticating user root 103.175.73.3 port 52040 [preauth]
Nov 22 04:47:55 np0005531754.novalocal sshd-session[7565]: Connection closed by authenticating user root 103.175.73.3 port 52940 [preauth]
Nov 22 04:47:57 np0005531754.novalocal sshd-session[7567]: Connection closed by authenticating user root 103.175.73.3 port 53562 [preauth]
Nov 22 04:47:58 np0005531754.novalocal sshd-session[7569]: Connection closed by authenticating user root 103.175.73.3 port 54086 [preauth]
Nov 22 04:47:59 np0005531754.novalocal sshd-session[7571]: Connection closed by authenticating user root 103.175.73.3 port 54712 [preauth]
Nov 22 04:48:01 np0005531754.novalocal sshd-session[7574]: Connection closed by authenticating user root 103.175.73.3 port 55558 [preauth]
Nov 22 04:48:02 np0005531754.novalocal sshd-session[7576]: Connection closed by authenticating user root 103.175.73.3 port 56306 [preauth]
Nov 22 04:48:04 np0005531754.novalocal sshd-session[7578]: Connection closed by authenticating user root 103.175.73.3 port 57060 [preauth]
Nov 22 04:48:05 np0005531754.novalocal sshd-session[7580]: Connection closed by authenticating user root 103.175.73.3 port 57604 [preauth]
Nov 22 04:48:06 np0005531754.novalocal sshd-session[7582]: Connection closed by authenticating user root 103.175.73.3 port 58394 [preauth]
Nov 22 04:48:08 np0005531754.novalocal sshd-session[7584]: Connection closed by authenticating user root 103.175.73.3 port 59212 [preauth]
Nov 22 04:48:09 np0005531754.novalocal sshd-session[7586]: Connection closed by authenticating user root 103.175.73.3 port 59684 [preauth]
Nov 22 04:48:10 np0005531754.novalocal sshd-session[7588]: Connection closed by authenticating user root 103.175.73.3 port 60278 [preauth]
Nov 22 04:48:12 np0005531754.novalocal sshd-session[7590]: Connection closed by authenticating user root 103.175.73.3 port 32868 [preauth]
Nov 22 04:48:13 np0005531754.novalocal sshd-session[7592]: Connection closed by authenticating user root 103.175.73.3 port 33736 [preauth]
Nov 22 04:48:15 np0005531754.novalocal sshd-session[7594]: Connection closed by authenticating user root 103.175.73.3 port 34532 [preauth]
Nov 22 04:48:16 np0005531754.novalocal sshd-session[7596]: Connection closed by authenticating user root 103.175.73.3 port 35076 [preauth]
Nov 22 04:48:17 np0005531754.novalocal sshd-session[7598]: Connection closed by authenticating user root 103.175.73.3 port 35836 [preauth]
Nov 22 04:48:19 np0005531754.novalocal sshd-session[7600]: Connection closed by authenticating user root 103.175.73.3 port 36536 [preauth]
Nov 22 04:48:20 np0005531754.novalocal sshd-session[7602]: Connection closed by authenticating user root 103.175.73.3 port 37182 [preauth]
Nov 22 04:48:21 np0005531754.novalocal sshd-session[7604]: Connection closed by authenticating user root 103.175.73.3 port 37716 [preauth]
Nov 22 04:48:23 np0005531754.novalocal sshd-session[7606]: Connection closed by authenticating user root 103.175.73.3 port 38412 [preauth]
Nov 22 04:48:24 np0005531754.novalocal sshd-session[7608]: Connection closed by authenticating user root 103.175.73.3 port 39132 [preauth]
Nov 22 04:48:26 np0005531754.novalocal sshd-session[7610]: Connection closed by authenticating user root 103.175.73.3 port 40068 [preauth]
Nov 22 04:48:27 np0005531754.novalocal sshd-session[7612]: Connection closed by authenticating user root 103.175.73.3 port 40732 [preauth]
Nov 22 04:48:28 np0005531754.novalocal sshd-session[7614]: Connection closed by authenticating user root 103.175.73.3 port 41284 [preauth]
Nov 22 04:48:30 np0005531754.novalocal sshd-session[7616]: Connection closed by authenticating user root 103.175.73.3 port 42080 [preauth]
Nov 22 04:48:31 np0005531754.novalocal sshd-session[7618]: Connection closed by authenticating user root 103.175.73.3 port 42736 [preauth]
Nov 22 04:48:32 np0005531754.novalocal sshd-session[7620]: Connection closed by authenticating user root 103.175.73.3 port 43338 [preauth]
Nov 22 04:48:34 np0005531754.novalocal sshd-session[7622]: Connection closed by authenticating user root 103.175.73.3 port 44060 [preauth]
Nov 22 04:48:35 np0005531754.novalocal sshd-session[7624]: Connection closed by authenticating user root 103.175.73.3 port 44678 [preauth]
Nov 22 04:48:36 np0005531754.novalocal sshd-session[7626]: Connection closed by authenticating user root 103.175.73.3 port 45550 [preauth]
Nov 22 04:48:38 np0005531754.novalocal sshd-session[7629]: Invalid user user from 103.175.73.3 port 46524
Nov 22 04:48:38 np0005531754.novalocal sshd-session[7629]: Connection closed by invalid user user 103.175.73.3 port 46524 [preauth]
Nov 22 04:48:39 np0005531754.novalocal sshd-session[7631]: Invalid user user from 103.175.73.3 port 47016
Nov 22 04:48:39 np0005531754.novalocal sshd-session[7631]: Connection closed by invalid user user 103.175.73.3 port 47016 [preauth]
Nov 22 04:48:41 np0005531754.novalocal sshd-session[7633]: Invalid user user from 103.175.73.3 port 47926
Nov 22 04:48:41 np0005531754.novalocal sshd-session[7633]: Connection closed by invalid user user 103.175.73.3 port 47926 [preauth]
Nov 22 04:48:42 np0005531754.novalocal sshd-session[7635]: Invalid user user from 103.175.73.3 port 48548
Nov 22 04:48:42 np0005531754.novalocal sshd-session[7635]: Connection closed by invalid user user 103.175.73.3 port 48548 [preauth]
Nov 22 04:48:43 np0005531754.novalocal sshd-session[7637]: Invalid user user from 103.175.73.3 port 49144
Nov 22 04:48:44 np0005531754.novalocal sshd-session[7637]: Connection closed by invalid user user 103.175.73.3 port 49144 [preauth]
Nov 22 04:48:45 np0005531754.novalocal sshd-session[7639]: Invalid user user from 103.175.73.3 port 49798
Nov 22 04:48:45 np0005531754.novalocal sshd-session[7639]: Connection closed by invalid user user 103.175.73.3 port 49798 [preauth]
Nov 22 04:48:46 np0005531754.novalocal sshd-session[7641]: Invalid user user from 103.175.73.3 port 50646
Nov 22 04:48:46 np0005531754.novalocal sshd-session[7641]: Connection closed by invalid user user 103.175.73.3 port 50646 [preauth]
Nov 22 04:48:47 np0005531754.novalocal sshd-session[7643]: Invalid user user from 103.175.73.3 port 51240
Nov 22 04:48:48 np0005531754.novalocal sshd-session[7643]: Connection closed by invalid user user 103.175.73.3 port 51240 [preauth]
Nov 22 04:48:49 np0005531754.novalocal sshd-session[7645]: Invalid user user from 103.175.73.3 port 52284
Nov 22 04:48:49 np0005531754.novalocal sshd-session[7645]: Connection closed by invalid user user 103.175.73.3 port 52284 [preauth]
Nov 22 04:48:50 np0005531754.novalocal sshd-session[7647]: Invalid user user from 103.175.73.3 port 52946
Nov 22 04:48:50 np0005531754.novalocal sshd-session[7647]: Connection closed by invalid user user 103.175.73.3 port 52946 [preauth]
Nov 22 04:48:52 np0005531754.novalocal sshd-session[7649]: Invalid user user from 103.175.73.3 port 53700
Nov 22 04:48:52 np0005531754.novalocal sshd-session[7649]: Connection closed by invalid user user 103.175.73.3 port 53700 [preauth]
Nov 22 04:48:53 np0005531754.novalocal sshd-session[7651]: Invalid user user from 103.175.73.3 port 54242
Nov 22 04:48:53 np0005531754.novalocal sshd-session[7651]: Connection closed by invalid user user 103.175.73.3 port 54242 [preauth]
Nov 22 04:48:54 np0005531754.novalocal sshd-session[7653]: Invalid user user from 103.175.73.3 port 54838
Nov 22 04:48:55 np0005531754.novalocal sshd-session[7653]: Connection closed by invalid user user 103.175.73.3 port 54838 [preauth]
Nov 22 04:48:56 np0005531754.novalocal sshd-session[7655]: Invalid user user from 103.175.73.3 port 55616
Nov 22 04:48:56 np0005531754.novalocal sshd-session[7655]: Connection closed by invalid user user 103.175.73.3 port 55616 [preauth]
Nov 22 04:48:57 np0005531754.novalocal systemd[4302]: Created slice User Background Tasks Slice.
Nov 22 04:48:57 np0005531754.novalocal systemd[4302]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 04:48:57 np0005531754.novalocal systemd[4302]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 04:48:57 np0005531754.novalocal sshd-session[7657]: Invalid user user from 103.175.73.3 port 56360
Nov 22 04:48:57 np0005531754.novalocal sshd-session[7657]: Connection closed by invalid user user 103.175.73.3 port 56360 [preauth]
Nov 22 04:48:58 np0005531754.novalocal sshd-session[7661]: Invalid user user from 103.175.73.3 port 56916
Nov 22 04:48:59 np0005531754.novalocal sshd-session[7661]: Connection closed by invalid user user 103.175.73.3 port 56916 [preauth]
Nov 22 04:49:00 np0005531754.novalocal sshd-session[7663]: Invalid user user from 103.175.73.3 port 57732
Nov 22 04:49:00 np0005531754.novalocal sshd-session[7663]: Connection closed by invalid user user 103.175.73.3 port 57732 [preauth]
Nov 22 04:49:01 np0005531754.novalocal sshd-session[7665]: Invalid user user from 103.175.73.3 port 58712
Nov 22 04:49:01 np0005531754.novalocal sshd-session[7665]: Connection closed by invalid user user 103.175.73.3 port 58712 [preauth]
Nov 22 04:49:03 np0005531754.novalocal sshd-session[7667]: Invalid user user from 103.175.73.3 port 59194
Nov 22 04:49:03 np0005531754.novalocal sshd-session[7667]: Connection closed by invalid user user 103.175.73.3 port 59194 [preauth]
Nov 22 04:49:04 np0005531754.novalocal sshd-session[7669]: Invalid user user from 103.175.73.3 port 60028
Nov 22 04:49:04 np0005531754.novalocal sshd-session[7669]: Connection closed by invalid user user 103.175.73.3 port 60028 [preauth]
Nov 22 04:49:05 np0005531754.novalocal sshd-session[7671]: Invalid user user from 103.175.73.3 port 60444
Nov 22 04:49:06 np0005531754.novalocal sshd-session[7671]: Connection closed by invalid user user 103.175.73.3 port 60444 [preauth]
Nov 22 04:49:07 np0005531754.novalocal sshd-session[7673]: Invalid user user from 103.175.73.3 port 33034
Nov 22 04:49:07 np0005531754.novalocal sshd-session[7673]: Connection closed by invalid user user 103.175.73.3 port 33034 [preauth]
Nov 22 04:49:08 np0005531754.novalocal sshd-session[7675]: Invalid user user from 103.175.73.3 port 33936
Nov 22 04:49:08 np0005531754.novalocal sshd-session[7675]: Connection closed by invalid user user 103.175.73.3 port 33936 [preauth]
Nov 22 04:49:10 np0005531754.novalocal sshd-session[7677]: Invalid user user from 103.175.73.3 port 34468
Nov 22 04:49:10 np0005531754.novalocal sshd-session[7677]: Connection closed by invalid user user 103.175.73.3 port 34468 [preauth]
Nov 22 04:49:11 np0005531754.novalocal sshd-session[7679]: Invalid user user from 103.175.73.3 port 35194
Nov 22 04:49:11 np0005531754.novalocal sshd-session[7679]: Connection closed by invalid user user 103.175.73.3 port 35194 [preauth]
Nov 22 04:49:12 np0005531754.novalocal sshd-session[7681]: Invalid user user from 103.175.73.3 port 36110
Nov 22 04:49:13 np0005531754.novalocal sshd-session[7681]: Connection closed by invalid user user 103.175.73.3 port 36110 [preauth]
Nov 22 04:49:14 np0005531754.novalocal sshd-session[7683]: Invalid user user from 103.175.73.3 port 36792
Nov 22 04:49:14 np0005531754.novalocal sshd-session[7683]: Connection closed by invalid user user 103.175.73.3 port 36792 [preauth]
Nov 22 04:49:15 np0005531754.novalocal sshd-session[7685]: Invalid user user from 103.175.73.3 port 37564
Nov 22 04:49:15 np0005531754.novalocal sshd-session[7685]: Connection closed by invalid user user 103.175.73.3 port 37564 [preauth]
Nov 22 04:49:16 np0005531754.novalocal sshd-session[7687]: Invalid user user from 103.175.73.3 port 38240
Nov 22 04:49:17 np0005531754.novalocal sshd-session[7687]: Connection closed by invalid user user 103.175.73.3 port 38240 [preauth]
Nov 22 04:49:18 np0005531754.novalocal sshd-session[7689]: Invalid user user from 103.175.73.3 port 38672
Nov 22 04:49:18 np0005531754.novalocal sshd-session[7689]: Connection closed by invalid user user 103.175.73.3 port 38672 [preauth]
Nov 22 04:49:19 np0005531754.novalocal sshd-session[7691]: Invalid user user from 103.175.73.3 port 39716
Nov 22 04:49:19 np0005531754.novalocal sshd-session[7691]: Connection closed by invalid user user 103.175.73.3 port 39716 [preauth]
Nov 22 04:49:21 np0005531754.novalocal sshd-session[7693]: Invalid user user from 103.175.73.3 port 40462
Nov 22 04:49:21 np0005531754.novalocal sshd-session[7693]: Connection closed by invalid user user 103.175.73.3 port 40462 [preauth]
Nov 22 04:49:22 np0005531754.novalocal sshd-session[7695]: Invalid user user from 103.175.73.3 port 40926
Nov 22 04:49:22 np0005531754.novalocal sshd-session[7695]: Connection closed by invalid user user 103.175.73.3 port 40926 [preauth]
Nov 22 04:49:23 np0005531754.novalocal sshd-session[7697]: Invalid user user from 103.175.73.3 port 41806
Nov 22 04:49:24 np0005531754.novalocal sshd-session[7697]: Connection closed by invalid user user 103.175.73.3 port 41806 [preauth]
Nov 22 04:49:25 np0005531754.novalocal sshd-session[7699]: Invalid user user from 103.175.73.3 port 42668
Nov 22 04:49:25 np0005531754.novalocal sshd-session[7699]: Connection closed by invalid user user 103.175.73.3 port 42668 [preauth]
Nov 22 04:49:26 np0005531754.novalocal sshd-session[7701]: Invalid user user from 103.175.73.3 port 43474
Nov 22 04:49:26 np0005531754.novalocal sshd-session[7701]: Connection closed by invalid user user 103.175.73.3 port 43474 [preauth]
Nov 22 04:49:27 np0005531754.novalocal sshd-session[7703]: Invalid user user from 103.175.73.3 port 44102
Nov 22 04:49:28 np0005531754.novalocal sshd-session[7703]: Connection closed by invalid user user 103.175.73.3 port 44102 [preauth]
Nov 22 04:49:29 np0005531754.novalocal sshd-session[7705]: Invalid user user from 103.175.73.3 port 44550
Nov 22 04:49:29 np0005531754.novalocal sshd-session[7705]: Connection closed by invalid user user 103.175.73.3 port 44550 [preauth]
Nov 22 04:49:30 np0005531754.novalocal sshd-session[7707]: Invalid user user from 103.175.73.3 port 45492
Nov 22 04:49:30 np0005531754.novalocal sshd-session[7707]: Connection closed by invalid user user 103.175.73.3 port 45492 [preauth]
Nov 22 04:49:32 np0005531754.novalocal sshd-session[7709]: Invalid user user from 103.175.73.3 port 46324
Nov 22 04:49:32 np0005531754.novalocal sshd-session[7709]: Connection closed by invalid user user 103.175.73.3 port 46324 [preauth]
Nov 22 04:49:33 np0005531754.novalocal sshd-session[7711]: Invalid user user from 103.175.73.3 port 46792
Nov 22 04:49:33 np0005531754.novalocal sshd-session[7711]: Connection closed by invalid user user 103.175.73.3 port 46792 [preauth]
Nov 22 04:49:34 np0005531754.novalocal sshd-session[7713]: Invalid user user from 103.175.73.3 port 47634
Nov 22 04:49:35 np0005531754.novalocal sshd-session[7713]: Connection closed by invalid user user 103.175.73.3 port 47634 [preauth]
Nov 22 04:49:36 np0005531754.novalocal sshd-session[7715]: Invalid user user from 103.175.73.3 port 48384
Nov 22 04:49:36 np0005531754.novalocal sshd-session[7715]: Connection closed by invalid user user 103.175.73.3 port 48384 [preauth]
Nov 22 04:49:37 np0005531754.novalocal sshd-session[7717]: Invalid user user from 103.175.73.3 port 49070
Nov 22 04:49:37 np0005531754.novalocal sshd-session[7717]: Connection closed by invalid user user 103.175.73.3 port 49070 [preauth]
Nov 22 04:49:39 np0005531754.novalocal sshd-session[7719]: Invalid user user from 103.175.73.3 port 50118
Nov 22 04:49:39 np0005531754.novalocal sshd-session[7719]: Connection closed by invalid user user 103.175.73.3 port 50118 [preauth]
Nov 22 04:49:40 np0005531754.novalocal sshd-session[7721]: Invalid user user from 103.175.73.3 port 50518
Nov 22 04:49:40 np0005531754.novalocal sshd-session[7721]: Connection closed by invalid user user 103.175.73.3 port 50518 [preauth]
Nov 22 04:49:41 np0005531754.novalocal sshd-session[7723]: Invalid user user from 103.175.73.3 port 51098
Nov 22 04:49:41 np0005531754.novalocal sshd-session[7723]: Connection closed by invalid user user 103.175.73.3 port 51098 [preauth]
Nov 22 04:49:43 np0005531754.novalocal sshd-session[7725]: Invalid user user from 103.175.73.3 port 51970
Nov 22 04:49:43 np0005531754.novalocal sshd-session[7725]: Connection closed by invalid user user 103.175.73.3 port 51970 [preauth]
Nov 22 04:49:44 np0005531754.novalocal sshd-session[7727]: Invalid user user from 103.175.73.3 port 52714
Nov 22 04:49:44 np0005531754.novalocal sshd-session[7727]: Connection closed by invalid user user 103.175.73.3 port 52714 [preauth]
Nov 22 04:49:45 np0005531754.novalocal sshd-session[7730]: Invalid user user from 103.175.73.3 port 53378
Nov 22 04:49:46 np0005531754.novalocal sshd-session[7730]: Connection closed by invalid user user 103.175.73.3 port 53378 [preauth]
Nov 22 04:49:47 np0005531754.novalocal sshd-session[7732]: Invalid user user from 103.175.73.3 port 54180
Nov 22 04:49:47 np0005531754.novalocal sshd-session[7732]: Connection closed by invalid user user 103.175.73.3 port 54180 [preauth]
Nov 22 04:49:48 np0005531754.novalocal sshd-session[7734]: Invalid user user from 103.175.73.3 port 54908
Nov 22 04:49:48 np0005531754.novalocal sshd-session[7734]: Connection closed by invalid user user 103.175.73.3 port 54908 [preauth]
Nov 22 04:49:49 np0005531754.novalocal sshd-session[7736]: Invalid user user from 103.175.73.3 port 55780
Nov 22 04:49:50 np0005531754.novalocal sshd-session[7736]: Connection closed by invalid user user 103.175.73.3 port 55780 [preauth]
Nov 22 04:49:51 np0005531754.novalocal sshd-session[7738]: Invalid user user from 103.175.73.3 port 56578
Nov 22 04:49:51 np0005531754.novalocal sshd-session[7738]: Connection closed by invalid user user 103.175.73.3 port 56578 [preauth]
Nov 22 04:49:52 np0005531754.novalocal sshd-session[7740]: Invalid user user from 103.175.73.3 port 57004
Nov 22 04:49:52 np0005531754.novalocal sshd-session[7740]: Connection closed by invalid user user 103.175.73.3 port 57004 [preauth]
Nov 22 04:49:54 np0005531754.novalocal sshd-session[7742]: Invalid user user from 103.175.73.3 port 57748
Nov 22 04:49:54 np0005531754.novalocal sshd-session[7742]: Connection closed by invalid user user 103.175.73.3 port 57748 [preauth]
Nov 22 04:49:55 np0005531754.novalocal sshd-session[7744]: Invalid user user from 103.175.73.3 port 58636
Nov 22 04:49:55 np0005531754.novalocal sshd-session[7744]: Connection closed by invalid user user 103.175.73.3 port 58636 [preauth]
Nov 22 04:49:56 np0005531754.novalocal sshd-session[7746]: Invalid user user from 103.175.73.3 port 59286
Nov 22 04:49:57 np0005531754.novalocal sshd-session[7746]: Connection closed by invalid user user 103.175.73.3 port 59286 [preauth]
Nov 22 04:49:57 np0005531754.novalocal sshd-session[7748]: Invalid user admin from 123.253.22.30 port 49546
Nov 22 04:49:58 np0005531754.novalocal sshd-session[7748]: Connection closed by invalid user admin 123.253.22.30 port 49546 [preauth]
Nov 22 04:49:58 np0005531754.novalocal sshd-session[7750]: Invalid user user from 103.175.73.3 port 60018
Nov 22 04:49:58 np0005531754.novalocal sshd-session[7750]: Connection closed by invalid user user 103.175.73.3 port 60018 [preauth]
Nov 22 04:49:59 np0005531754.novalocal sshd-session[7752]: Invalid user user from 103.175.73.3 port 60834
Nov 22 04:49:59 np0005531754.novalocal sshd-session[7752]: Connection closed by invalid user user 103.175.73.3 port 60834 [preauth]
Nov 22 04:50:00 np0005531754.novalocal sshd-session[7754]: Invalid user user from 103.175.73.3 port 33316
Nov 22 04:50:01 np0005531754.novalocal sshd-session[7754]: Connection closed by invalid user user 103.175.73.3 port 33316 [preauth]
Nov 22 04:50:02 np0005531754.novalocal sshd-session[7756]: Invalid user user from 103.175.73.3 port 34194
Nov 22 04:50:02 np0005531754.novalocal sshd-session[7756]: Connection closed by invalid user user 103.175.73.3 port 34194 [preauth]
Nov 22 04:50:03 np0005531754.novalocal sshd-session[7758]: Invalid user user from 103.175.73.3 port 34720
Nov 22 04:50:03 np0005531754.novalocal sshd-session[7758]: Connection closed by invalid user user 103.175.73.3 port 34720 [preauth]
Nov 22 04:50:05 np0005531754.novalocal sshd-session[7760]: Invalid user user from 103.175.73.3 port 35346
Nov 22 04:50:05 np0005531754.novalocal sshd-session[7760]: Connection closed by invalid user user 103.175.73.3 port 35346 [preauth]
Nov 22 04:50:07 np0005531754.novalocal sshd-session[7762]: Invalid user user from 103.175.73.3 port 36468
Nov 22 04:50:07 np0005531754.novalocal sshd-session[7762]: Connection closed by invalid user user 103.175.73.3 port 36468 [preauth]
Nov 22 04:50:08 np0005531754.novalocal sshd-session[7765]: Invalid user user from 103.175.73.3 port 37268
Nov 22 04:50:08 np0005531754.novalocal sshd-session[7765]: Connection closed by invalid user user 103.175.73.3 port 37268 [preauth]
Nov 22 04:50:09 np0005531754.novalocal sshd-session[7767]: Invalid user user from 103.175.73.3 port 38030
Nov 22 04:50:10 np0005531754.novalocal sshd-session[7767]: Connection closed by invalid user user 103.175.73.3 port 38030 [preauth]
Nov 22 04:50:11 np0005531754.novalocal sshd-session[7769]: Invalid user user from 103.175.73.3 port 38678
Nov 22 04:50:11 np0005531754.novalocal sshd-session[7769]: Connection closed by invalid user user 103.175.73.3 port 38678 [preauth]
Nov 22 04:50:12 np0005531754.novalocal sshd-session[7771]: Invalid user user from 103.175.73.3 port 39424
Nov 22 04:50:12 np0005531754.novalocal sshd-session[7771]: Connection closed by invalid user user 103.175.73.3 port 39424 [preauth]
Nov 22 04:50:14 np0005531754.novalocal sshd-session[7773]: Invalid user user from 103.175.73.3 port 40330
Nov 22 04:50:14 np0005531754.novalocal sshd-session[7773]: Connection closed by invalid user user 103.175.73.3 port 40330 [preauth]
Nov 22 04:50:15 np0005531754.novalocal sshd-session[7775]: Invalid user user from 103.175.73.3 port 40862
Nov 22 04:50:15 np0005531754.novalocal sshd-session[7775]: Connection closed by invalid user user 103.175.73.3 port 40862 [preauth]
Nov 22 04:50:16 np0005531754.novalocal sshd-session[7777]: Invalid user user from 103.175.73.3 port 41476
Nov 22 04:50:17 np0005531754.novalocal sshd-session[7777]: Connection closed by invalid user user 103.175.73.3 port 41476 [preauth]
Nov 22 04:50:18 np0005531754.novalocal sshd-session[7779]: Invalid user user from 103.175.73.3 port 42390
Nov 22 04:50:18 np0005531754.novalocal sshd-session[7779]: Connection closed by invalid user user 103.175.73.3 port 42390 [preauth]
Nov 22 04:50:19 np0005531754.novalocal sshd-session[7781]: Invalid user user from 103.175.73.3 port 43086
Nov 22 04:50:19 np0005531754.novalocal sshd-session[7781]: Connection closed by invalid user user 103.175.73.3 port 43086 [preauth]
Nov 22 04:50:20 np0005531754.novalocal sshd-session[7783]: Invalid user user from 103.175.73.3 port 43960
Nov 22 04:50:21 np0005531754.novalocal sshd-session[7783]: Connection closed by invalid user user 103.175.73.3 port 43960 [preauth]
Nov 22 04:50:22 np0005531754.novalocal sshd-session[7785]: Invalid user user from 103.175.73.3 port 44680
Nov 22 04:50:22 np0005531754.novalocal sshd-session[7785]: Connection closed by invalid user user 103.175.73.3 port 44680 [preauth]
Nov 22 04:50:23 np0005531754.novalocal sshd-session[7787]: Invalid user user from 103.175.73.3 port 45230
Nov 22 04:50:23 np0005531754.novalocal sshd-session[7787]: Connection closed by invalid user user 103.175.73.3 port 45230 [preauth]
Nov 22 04:50:25 np0005531754.novalocal sshd-session[7789]: Invalid user user from 103.175.73.3 port 46182
Nov 22 04:50:25 np0005531754.novalocal sshd-session[7789]: Connection closed by invalid user user 103.175.73.3 port 46182 [preauth]
Nov 22 04:50:26 np0005531754.novalocal sshd-session[7791]: Invalid user user from 103.175.73.3 port 46866
Nov 22 04:50:26 np0005531754.novalocal sshd-session[7791]: Connection closed by invalid user user 103.175.73.3 port 46866 [preauth]
Nov 22 04:50:27 np0005531754.novalocal sshd-session[7793]: Invalid user user from 103.175.73.3 port 47428
Nov 22 04:50:28 np0005531754.novalocal sshd-session[7793]: Connection closed by invalid user user 103.175.73.3 port 47428 [preauth]
Nov 22 04:50:29 np0005531754.novalocal sshd-session[7795]: Invalid user user from 103.175.73.3 port 48126
Nov 22 04:50:29 np0005531754.novalocal sshd-session[7795]: Connection closed by invalid user user 103.175.73.3 port 48126 [preauth]
Nov 22 04:50:30 np0005531754.novalocal sshd-session[7797]: Invalid user user from 103.175.73.3 port 48890
Nov 22 04:50:30 np0005531754.novalocal sshd-session[7797]: Connection closed by invalid user user 103.175.73.3 port 48890 [preauth]
Nov 22 04:50:32 np0005531754.novalocal sshd-session[7799]: Invalid user ubuntu from 103.175.73.3 port 49826
Nov 22 04:50:32 np0005531754.novalocal sshd-session[7799]: Connection closed by invalid user ubuntu 103.175.73.3 port 49826 [preauth]
Nov 22 04:50:33 np0005531754.novalocal sshd-session[7801]: Invalid user ubuntu from 103.175.73.3 port 50640
Nov 22 04:50:33 np0005531754.novalocal sshd-session[7801]: Connection closed by invalid user ubuntu 103.175.73.3 port 50640 [preauth]
Nov 22 04:50:35 np0005531754.novalocal sshd-session[7803]: Invalid user ubuntu from 103.175.73.3 port 51222
Nov 22 04:50:35 np0005531754.novalocal sshd-session[7803]: Connection closed by invalid user ubuntu 103.175.73.3 port 51222 [preauth]
Nov 22 04:50:36 np0005531754.novalocal sshd-session[7805]: Invalid user ubuntu from 103.175.73.3 port 51818
Nov 22 04:50:36 np0005531754.novalocal sshd-session[7805]: Connection closed by invalid user ubuntu 103.175.73.3 port 51818 [preauth]
Nov 22 04:50:37 np0005531754.novalocal sshd-session[7807]: Invalid user ubuntu from 103.175.73.3 port 52758
Nov 22 04:50:37 np0005531754.novalocal sshd-session[7807]: Connection closed by invalid user ubuntu 103.175.73.3 port 52758 [preauth]
Nov 22 04:50:39 np0005531754.novalocal sshd-session[7809]: Invalid user ubuntu from 103.175.73.3 port 53404
Nov 22 04:50:39 np0005531754.novalocal sshd-session[7809]: Connection closed by invalid user ubuntu 103.175.73.3 port 53404 [preauth]
Nov 22 04:50:40 np0005531754.novalocal sshd-session[7811]: Invalid user ubuntu from 103.175.73.3 port 54122
Nov 22 04:50:40 np0005531754.novalocal sshd-session[7811]: Connection closed by invalid user ubuntu 103.175.73.3 port 54122 [preauth]
Nov 22 04:50:41 np0005531754.novalocal sshd-session[7813]: Invalid user ubuntu from 103.175.73.3 port 54874
Nov 22 04:50:42 np0005531754.novalocal sshd-session[7813]: Connection closed by invalid user ubuntu 103.175.73.3 port 54874 [preauth]
Nov 22 04:50:43 np0005531754.novalocal sshd-session[7815]: Invalid user ubuntu from 103.175.73.3 port 55620
Nov 22 04:50:43 np0005531754.novalocal sshd-session[7815]: Connection closed by invalid user ubuntu 103.175.73.3 port 55620 [preauth]
Nov 22 04:50:44 np0005531754.novalocal sshd-session[7817]: Invalid user ubuntu from 103.175.73.3 port 56742
Nov 22 04:50:44 np0005531754.novalocal sshd-session[7817]: Connection closed by invalid user ubuntu 103.175.73.3 port 56742 [preauth]
Nov 22 04:50:46 np0005531754.novalocal sshd-session[7819]: Invalid user ubuntu from 103.175.73.3 port 57818
Nov 22 04:50:46 np0005531754.novalocal sshd-session[7819]: Connection closed by invalid user ubuntu 103.175.73.3 port 57818 [preauth]
Nov 22 04:50:47 np0005531754.novalocal sshd-session[7821]: Invalid user ubuntu from 103.175.73.3 port 58506
Nov 22 04:50:47 np0005531754.novalocal sshd-session[7821]: Connection closed by invalid user ubuntu 103.175.73.3 port 58506 [preauth]
Nov 22 04:50:48 np0005531754.novalocal sshd-session[7823]: Invalid user ubuntu from 103.175.73.3 port 59260
Nov 22 04:50:48 np0005531754.novalocal sshd-session[7823]: Connection closed by invalid user ubuntu 103.175.73.3 port 59260 [preauth]
Nov 22 04:50:50 np0005531754.novalocal sshd-session[7825]: Invalid user ubuntu from 103.175.73.3 port 60094
Nov 22 04:50:50 np0005531754.novalocal sshd-session[7825]: Connection closed by invalid user ubuntu 103.175.73.3 port 60094 [preauth]
Nov 22 04:50:51 np0005531754.novalocal sshd-session[7827]: Invalid user ubuntu from 103.175.73.3 port 60672
Nov 22 04:50:51 np0005531754.novalocal sshd-session[7827]: Connection closed by invalid user ubuntu 103.175.73.3 port 60672 [preauth]
Nov 22 04:50:52 np0005531754.novalocal sshd-session[7829]: Invalid user ubuntu from 103.175.73.3 port 33262
Nov 22 04:50:53 np0005531754.novalocal sshd-session[7829]: Connection closed by invalid user ubuntu 103.175.73.3 port 33262 [preauth]
Nov 22 04:50:54 np0005531754.novalocal sshd-session[7831]: Invalid user ubuntu from 103.175.73.3 port 33898
Nov 22 04:50:54 np0005531754.novalocal sshd-session[7831]: Connection closed by invalid user ubuntu 103.175.73.3 port 33898 [preauth]
Nov 22 04:50:55 np0005531754.novalocal sshd-session[7833]: Invalid user ubuntu from 103.175.73.3 port 34752
Nov 22 04:50:55 np0005531754.novalocal sshd-session[7833]: Connection closed by invalid user ubuntu 103.175.73.3 port 34752 [preauth]
Nov 22 04:50:56 np0005531754.novalocal sshd-session[7835]: Invalid user ubuntu from 103.175.73.3 port 35608
Nov 22 04:50:57 np0005531754.novalocal sshd-session[7835]: Connection closed by invalid user ubuntu 103.175.73.3 port 35608 [preauth]
Nov 22 04:50:58 np0005531754.novalocal sshd-session[7837]: Invalid user ubuntu from 103.175.73.3 port 36044
Nov 22 04:50:58 np0005531754.novalocal sshd-session[7837]: Connection closed by invalid user ubuntu 103.175.73.3 port 36044 [preauth]
Nov 22 04:50:59 np0005531754.novalocal sshd-session[7839]: Invalid user ubuntu from 103.175.73.3 port 36680
Nov 22 04:50:59 np0005531754.novalocal sshd-session[7839]: Connection closed by invalid user ubuntu 103.175.73.3 port 36680 [preauth]
Nov 22 04:51:01 np0005531754.novalocal sshd-session[7841]: Invalid user ubuntu from 103.175.73.3 port 37548
Nov 22 04:51:01 np0005531754.novalocal sshd-session[7841]: Connection closed by invalid user ubuntu 103.175.73.3 port 37548 [preauth]
Nov 22 04:51:02 np0005531754.novalocal sshd-session[7843]: Invalid user ubuntu from 103.175.73.3 port 38262
Nov 22 04:51:02 np0005531754.novalocal sshd-session[7843]: Connection closed by invalid user ubuntu 103.175.73.3 port 38262 [preauth]
Nov 22 04:51:03 np0005531754.novalocal sshd-session[7845]: Invalid user ubuntu from 103.175.73.3 port 38930
Nov 22 04:51:03 np0005531754.novalocal sshd-session[7845]: Connection closed by invalid user ubuntu 103.175.73.3 port 38930 [preauth]
Nov 22 04:51:05 np0005531754.novalocal sshd-session[7847]: Invalid user ubuntu from 103.175.73.3 port 39610
Nov 22 04:51:05 np0005531754.novalocal sshd-session[7847]: Connection closed by invalid user ubuntu 103.175.73.3 port 39610 [preauth]
Nov 22 04:51:06 np0005531754.novalocal sshd-session[7849]: Invalid user ubuntu from 103.175.73.3 port 40196
Nov 22 04:51:06 np0005531754.novalocal sshd-session[7849]: Connection closed by invalid user ubuntu 103.175.73.3 port 40196 [preauth]
Nov 22 04:51:07 np0005531754.novalocal sshd-session[7851]: Invalid user ubuntu from 103.175.73.3 port 41030
Nov 22 04:51:08 np0005531754.novalocal sshd-session[7851]: Connection closed by invalid user ubuntu 103.175.73.3 port 41030 [preauth]
Nov 22 04:51:09 np0005531754.novalocal sshd-session[7853]: Invalid user ubuntu from 103.175.73.3 port 41736
Nov 22 04:51:09 np0005531754.novalocal sshd-session[7853]: Connection closed by invalid user ubuntu 103.175.73.3 port 41736 [preauth]
Nov 22 04:51:10 np0005531754.novalocal sshd-session[7855]: Invalid user ubuntu from 103.175.73.3 port 42294
Nov 22 04:51:10 np0005531754.novalocal sshd-session[7855]: Connection closed by invalid user ubuntu 103.175.73.3 port 42294 [preauth]
Nov 22 04:51:11 np0005531754.novalocal sshd-session[7857]: Invalid user ubuntu from 103.175.73.3 port 43038
Nov 22 04:51:12 np0005531754.novalocal sshd-session[7857]: Connection closed by invalid user ubuntu 103.175.73.3 port 43038 [preauth]
Nov 22 04:51:13 np0005531754.novalocal sshd-session[7861]: Connection closed by 80.94.92.166 port 60672
Nov 22 04:51:13 np0005531754.novalocal sshd-session[7859]: Invalid user ubuntu from 103.175.73.3 port 43752
Nov 22 04:51:13 np0005531754.novalocal sshd-session[7859]: Connection closed by invalid user ubuntu 103.175.73.3 port 43752 [preauth]
Nov 22 04:51:14 np0005531754.novalocal sshd-session[7862]: Invalid user ubuntu from 103.175.73.3 port 44532
Nov 22 04:51:14 np0005531754.novalocal sshd-session[7862]: Connection closed by invalid user ubuntu 103.175.73.3 port 44532 [preauth]
Nov 22 04:51:16 np0005531754.novalocal sshd-session[7864]: Invalid user ubuntu from 103.175.73.3 port 45256
Nov 22 04:51:16 np0005531754.novalocal sshd-session[7864]: Connection closed by invalid user ubuntu 103.175.73.3 port 45256 [preauth]
Nov 22 04:51:17 np0005531754.novalocal sshd-session[7866]: Invalid user ubuntu from 103.175.73.3 port 45848
Nov 22 04:51:17 np0005531754.novalocal sshd-session[7866]: Connection closed by invalid user ubuntu 103.175.73.3 port 45848 [preauth]
Nov 22 04:51:18 np0005531754.novalocal sshd-session[7868]: Invalid user ubuntu from 103.175.73.3 port 46708
Nov 22 04:51:19 np0005531754.novalocal sshd-session[7868]: Connection closed by invalid user ubuntu 103.175.73.3 port 46708 [preauth]
Nov 22 04:51:20 np0005531754.novalocal sshd-session[7870]: Invalid user ubuntu from 103.175.73.3 port 47386
Nov 22 04:51:20 np0005531754.novalocal sshd-session[7870]: Connection closed by invalid user ubuntu 103.175.73.3 port 47386 [preauth]
Nov 22 04:51:21 np0005531754.novalocal sshd-session[7872]: Invalid user ubuntu from 103.175.73.3 port 47936
Nov 22 04:51:21 np0005531754.novalocal sshd-session[7872]: Connection closed by invalid user ubuntu 103.175.73.3 port 47936 [preauth]
Nov 22 04:51:22 np0005531754.novalocal sshd-session[7874]: Invalid user ubuntu from 103.175.73.3 port 48628
Nov 22 04:51:23 np0005531754.novalocal sshd-session[7874]: Connection closed by invalid user ubuntu 103.175.73.3 port 48628 [preauth]
Nov 22 04:51:24 np0005531754.novalocal sshd-session[7876]: Invalid user ubuntu from 103.175.73.3 port 49354
Nov 22 04:51:24 np0005531754.novalocal sshd-session[7876]: Connection closed by invalid user ubuntu 103.175.73.3 port 49354 [preauth]
Nov 22 04:51:25 np0005531754.novalocal sshd-session[7878]: Invalid user ubuntu from 103.175.73.3 port 50038
Nov 22 04:51:25 np0005531754.novalocal sshd-session[7878]: Connection closed by invalid user ubuntu 103.175.73.3 port 50038 [preauth]
Nov 22 04:51:27 np0005531754.novalocal sshd-session[7880]: Invalid user ubuntu from 103.175.73.3 port 50924
Nov 22 04:51:27 np0005531754.novalocal sshd-session[7880]: Connection closed by invalid user ubuntu 103.175.73.3 port 50924 [preauth]
Nov 22 04:51:28 np0005531754.novalocal sshd-session[7882]: Invalid user ubuntu from 103.175.73.3 port 51650
Nov 22 04:51:28 np0005531754.novalocal sshd-session[7882]: Connection closed by invalid user ubuntu 103.175.73.3 port 51650 [preauth]
Nov 22 04:51:29 np0005531754.novalocal sshd-session[7884]: Invalid user ubuntu from 103.175.73.3 port 52542
Nov 22 04:51:29 np0005531754.novalocal sshd-session[7884]: Connection closed by invalid user ubuntu 103.175.73.3 port 52542 [preauth]
Nov 22 04:51:31 np0005531754.novalocal sshd-session[7886]: Invalid user ubuntu from 103.175.73.3 port 53414
Nov 22 04:51:31 np0005531754.novalocal sshd-session[7886]: Connection closed by invalid user ubuntu 103.175.73.3 port 53414 [preauth]
Nov 22 04:51:32 np0005531754.novalocal sshd-session[7888]: Invalid user ubuntu from 103.175.73.3 port 54054
Nov 22 04:51:32 np0005531754.novalocal sshd-session[7888]: Connection closed by invalid user ubuntu 103.175.73.3 port 54054 [preauth]
Nov 22 04:51:33 np0005531754.novalocal sshd-session[7890]: Invalid user ubuntu from 103.175.73.3 port 54648
Nov 22 04:51:34 np0005531754.novalocal sshd-session[7890]: Connection closed by invalid user ubuntu 103.175.73.3 port 54648 [preauth]
Nov 22 04:51:35 np0005531754.novalocal sshd-session[7892]: Invalid user ubuntu from 103.175.73.3 port 55312
Nov 22 04:51:35 np0005531754.novalocal sshd-session[7892]: Connection closed by invalid user ubuntu 103.175.73.3 port 55312 [preauth]
Nov 22 04:51:36 np0005531754.novalocal sshd-session[7894]: Invalid user ubuntu from 103.175.73.3 port 56130
Nov 22 04:51:36 np0005531754.novalocal sshd-session[7894]: Connection closed by invalid user ubuntu 103.175.73.3 port 56130 [preauth]
Nov 22 04:51:37 np0005531754.novalocal sshd-session[7896]: Invalid user ubuntu from 103.175.73.3 port 56884
Nov 22 04:51:38 np0005531754.novalocal sshd-session[7896]: Connection closed by invalid user ubuntu 103.175.73.3 port 56884 [preauth]
Nov 22 04:51:39 np0005531754.novalocal sshd-session[7898]: Invalid user ubuntu from 103.175.73.3 port 57698
Nov 22 04:51:39 np0005531754.novalocal sshd-session[7898]: Connection closed by invalid user ubuntu 103.175.73.3 port 57698 [preauth]
Nov 22 04:51:40 np0005531754.novalocal sshd-session[7900]: Invalid user ubuntu from 103.175.73.3 port 58230
Nov 22 04:51:40 np0005531754.novalocal sshd-session[7900]: Connection closed by invalid user ubuntu 103.175.73.3 port 58230 [preauth]
Nov 22 04:51:42 np0005531754.novalocal sshd-session[7902]: Invalid user ubuntu from 103.175.73.3 port 58834
Nov 22 04:51:42 np0005531754.novalocal sshd-session[7902]: Connection closed by invalid user ubuntu 103.175.73.3 port 58834 [preauth]
Nov 22 04:51:43 np0005531754.novalocal sshd-session[7904]: Invalid user ubuntu from 103.175.73.3 port 59616
Nov 22 04:51:43 np0005531754.novalocal sshd-session[7904]: Connection closed by invalid user ubuntu 103.175.73.3 port 59616 [preauth]
Nov 22 04:51:44 np0005531754.novalocal sshd-session[7906]: Invalid user ubuntu from 103.175.73.3 port 60366
Nov 22 04:51:45 np0005531754.novalocal sshd-session[7906]: Connection closed by invalid user ubuntu 103.175.73.3 port 60366 [preauth]
Nov 22 04:51:46 np0005531754.novalocal sshd-session[7908]: Invalid user ubuntu from 103.175.73.3 port 32780
Nov 22 04:51:46 np0005531754.novalocal sshd-session[7908]: Connection closed by invalid user ubuntu 103.175.73.3 port 32780 [preauth]
Nov 22 04:51:47 np0005531754.novalocal sshd-session[7910]: Invalid user ubuntu from 103.175.73.3 port 33520
Nov 22 04:51:47 np0005531754.novalocal sshd-session[7910]: Connection closed by invalid user ubuntu 103.175.73.3 port 33520 [preauth]
Nov 22 04:51:48 np0005531754.novalocal sshd-session[7912]: Invalid user ubuntu from 103.175.73.3 port 34226
Nov 22 04:51:49 np0005531754.novalocal sshd-session[7912]: Connection closed by invalid user ubuntu 103.175.73.3 port 34226 [preauth]
Nov 22 04:51:50 np0005531754.novalocal sshd-session[7914]: Invalid user ubuntu from 103.175.73.3 port 35084
Nov 22 04:51:51 np0005531754.novalocal sshd-session[7914]: Connection closed by invalid user ubuntu 103.175.73.3 port 35084 [preauth]
Nov 22 04:51:52 np0005531754.novalocal sshd-session[7917]: Invalid user ubuntu from 103.175.73.3 port 35970
Nov 22 04:51:52 np0005531754.novalocal sshd-session[7917]: Connection closed by invalid user ubuntu 103.175.73.3 port 35970 [preauth]
Nov 22 04:51:53 np0005531754.novalocal sshd-session[7919]: Invalid user ubuntu from 103.175.73.3 port 36460
Nov 22 04:51:53 np0005531754.novalocal sshd-session[7919]: Connection closed by invalid user ubuntu 103.175.73.3 port 36460 [preauth]
Nov 22 04:51:54 np0005531754.novalocal sshd-session[7921]: Invalid user ubuntu from 103.175.73.3 port 37120
Nov 22 04:51:55 np0005531754.novalocal sshd-session[7921]: Connection closed by invalid user ubuntu 103.175.73.3 port 37120 [preauth]
Nov 22 04:51:56 np0005531754.novalocal sshd-session[7923]: Invalid user ubuntu from 103.175.73.3 port 37984
Nov 22 04:51:56 np0005531754.novalocal sshd-session[7923]: Connection closed by invalid user ubuntu 103.175.73.3 port 37984 [preauth]
Nov 22 04:51:57 np0005531754.novalocal sshd-session[7925]: Invalid user ubuntu from 103.175.73.3 port 39170
Nov 22 04:51:57 np0005531754.novalocal sshd-session[7925]: Connection closed by invalid user ubuntu 103.175.73.3 port 39170 [preauth]
Nov 22 04:51:59 np0005531754.novalocal sshd-session[7927]: Invalid user ubuntu from 103.175.73.3 port 40128
Nov 22 04:51:59 np0005531754.novalocal sshd-session[7927]: Connection closed by invalid user ubuntu 103.175.73.3 port 40128 [preauth]
Nov 22 04:52:00 np0005531754.novalocal sshd-session[7929]: Invalid user ubuntu from 103.175.73.3 port 40702
Nov 22 04:52:00 np0005531754.novalocal sshd-session[7929]: Connection closed by invalid user ubuntu 103.175.73.3 port 40702 [preauth]
Nov 22 04:52:01 np0005531754.novalocal sshd-session[7931]: Invalid user ubuntu from 103.175.73.3 port 41460
Nov 22 04:52:02 np0005531754.novalocal sshd-session[7931]: Connection closed by invalid user ubuntu 103.175.73.3 port 41460 [preauth]
Nov 22 04:52:03 np0005531754.novalocal sshd-session[7933]: Invalid user ubuntu from 103.175.73.3 port 42126
Nov 22 04:52:03 np0005531754.novalocal sshd-session[7933]: Connection closed by invalid user ubuntu 103.175.73.3 port 42126 [preauth]
Nov 22 04:52:04 np0005531754.novalocal sshd-session[7935]: Invalid user ubuntu from 103.175.73.3 port 42778
Nov 22 04:52:04 np0005531754.novalocal sshd-session[7935]: Connection closed by invalid user ubuntu 103.175.73.3 port 42778 [preauth]
Nov 22 04:52:05 np0005531754.novalocal sshd-session[7937]: Invalid user ubuntu from 103.175.73.3 port 43400
Nov 22 04:52:06 np0005531754.novalocal sshd-session[7937]: Connection closed by invalid user ubuntu 103.175.73.3 port 43400 [preauth]
Nov 22 04:52:07 np0005531754.novalocal sshd-session[7939]: Invalid user ubuntu from 103.175.73.3 port 44032
Nov 22 04:52:07 np0005531754.novalocal sshd-session[7939]: Connection closed by invalid user ubuntu 103.175.73.3 port 44032 [preauth]
Nov 22 04:52:08 np0005531754.novalocal sshd-session[7941]: Invalid user ubuntu from 103.175.73.3 port 44800
Nov 22 04:52:08 np0005531754.novalocal sshd-session[7941]: Connection closed by invalid user ubuntu 103.175.73.3 port 44800 [preauth]
Nov 22 04:52:10 np0005531754.novalocal sshd-session[7943]: Invalid user ubuntu from 103.175.73.3 port 45540
Nov 22 04:52:10 np0005531754.novalocal sshd-session[7943]: Connection closed by invalid user ubuntu 103.175.73.3 port 45540 [preauth]
Nov 22 04:52:11 np0005531754.novalocal sshd-session[7945]: Invalid user ubuntu from 103.175.73.3 port 46112
Nov 22 04:52:11 np0005531754.novalocal sshd-session[7945]: Connection closed by invalid user ubuntu 103.175.73.3 port 46112 [preauth]
Nov 22 04:52:12 np0005531754.novalocal sshd-session[7947]: Invalid user ubuntu from 103.175.73.3 port 46750
Nov 22 04:52:13 np0005531754.novalocal sshd-session[7947]: Connection closed by invalid user ubuntu 103.175.73.3 port 46750 [preauth]
Nov 22 04:52:14 np0005531754.novalocal sshd-session[7949]: Invalid user ubuntu from 103.175.73.3 port 47512
Nov 22 04:52:14 np0005531754.novalocal sshd-session[7949]: Connection closed by invalid user ubuntu 103.175.73.3 port 47512 [preauth]
Nov 22 04:52:15 np0005531754.novalocal sshd-session[7951]: Invalid user ubuntu from 103.175.73.3 port 48172
Nov 22 04:52:15 np0005531754.novalocal sshd-session[7951]: Connection closed by invalid user ubuntu 103.175.73.3 port 48172 [preauth]
Nov 22 04:52:16 np0005531754.novalocal sshd-session[7953]: Invalid user ubuntu from 103.175.73.3 port 48778
Nov 22 04:52:17 np0005531754.novalocal sshd-session[7953]: Connection closed by invalid user ubuntu 103.175.73.3 port 48778 [preauth]
Nov 22 04:52:18 np0005531754.novalocal sshd-session[7955]: Invalid user ubuntu from 103.175.73.3 port 49404
Nov 22 04:52:18 np0005531754.novalocal sshd-session[7955]: Connection closed by invalid user ubuntu 103.175.73.3 port 49404 [preauth]
Nov 22 04:52:19 np0005531754.novalocal sshd-session[7957]: Invalid user ubuntu from 103.175.73.3 port 50062
Nov 22 04:52:19 np0005531754.novalocal sshd-session[7957]: Connection closed by invalid user ubuntu 103.175.73.3 port 50062 [preauth]
Nov 22 04:52:21 np0005531754.novalocal sshd-session[7959]: Invalid user ubuntu from 103.175.73.3 port 50802
Nov 22 04:52:21 np0005531754.novalocal sshd-session[7959]: Connection closed by invalid user ubuntu 103.175.73.3 port 50802 [preauth]
Nov 22 04:52:22 np0005531754.novalocal sshd-session[7961]: Invalid user ubuntu from 103.175.73.3 port 51514
Nov 22 04:52:22 np0005531754.novalocal sshd-session[7961]: Connection closed by invalid user ubuntu 103.175.73.3 port 51514 [preauth]
Nov 22 04:52:23 np0005531754.novalocal sshd-session[7963]: Invalid user ubuntu from 103.175.73.3 port 52116
Nov 22 04:52:24 np0005531754.novalocal sshd-session[7963]: Connection closed by invalid user ubuntu 103.175.73.3 port 52116 [preauth]
Nov 22 04:52:25 np0005531754.novalocal sshd-session[7965]: Invalid user debian from 103.175.73.3 port 52878
Nov 22 04:52:25 np0005531754.novalocal sshd-session[7965]: Connection closed by invalid user debian 103.175.73.3 port 52878 [preauth]
Nov 22 04:52:26 np0005531754.novalocal sshd-session[7967]: Invalid user debian from 103.175.73.3 port 53546
Nov 22 04:52:27 np0005531754.novalocal sshd-session[7967]: Connection closed by invalid user debian 103.175.73.3 port 53546 [preauth]
Nov 22 04:52:28 np0005531754.novalocal sshd-session[7969]: Invalid user debian from 103.175.73.3 port 54244
Nov 22 04:52:28 np0005531754.novalocal sshd-session[7969]: Connection closed by invalid user debian 103.175.73.3 port 54244 [preauth]
Nov 22 04:52:29 np0005531754.novalocal sshd-session[7971]: Invalid user debian from 103.175.73.3 port 54830
Nov 22 04:52:29 np0005531754.novalocal sshd-session[7971]: Connection closed by invalid user debian 103.175.73.3 port 54830 [preauth]
Nov 22 04:52:31 np0005531754.novalocal sshd-session[7973]: Invalid user debian from 103.175.73.3 port 55508
Nov 22 04:52:31 np0005531754.novalocal sshd-session[7973]: Connection closed by invalid user debian 103.175.73.3 port 55508 [preauth]
Nov 22 04:52:32 np0005531754.novalocal sshd-session[7975]: Invalid user debian from 103.175.73.3 port 56288
Nov 22 04:52:32 np0005531754.novalocal sshd-session[7975]: Connection closed by invalid user debian 103.175.73.3 port 56288 [preauth]
Nov 22 04:52:33 np0005531754.novalocal sshd-session[7977]: Invalid user debian from 103.175.73.3 port 56958
Nov 22 04:52:34 np0005531754.novalocal sshd-session[7977]: Connection closed by invalid user debian 103.175.73.3 port 56958 [preauth]
Nov 22 04:52:35 np0005531754.novalocal sshd-session[7979]: Invalid user debian from 103.175.73.3 port 57548
Nov 22 04:52:35 np0005531754.novalocal sshd-session[7979]: Connection closed by invalid user debian 103.175.73.3 port 57548 [preauth]
Nov 22 04:52:36 np0005531754.novalocal sshd-session[7981]: Invalid user debian from 103.175.73.3 port 58322
Nov 22 04:52:36 np0005531754.novalocal sshd-session[7981]: Connection closed by invalid user debian 103.175.73.3 port 58322 [preauth]
Nov 22 04:52:38 np0005531754.novalocal sshd-session[7983]: Invalid user debian from 103.175.73.3 port 58920
Nov 22 04:52:38 np0005531754.novalocal sshd-session[7983]: Connection closed by invalid user debian 103.175.73.3 port 58920 [preauth]
Nov 22 04:52:39 np0005531754.novalocal sshd-session[7985]: Invalid user debian from 103.175.73.3 port 59606
Nov 22 04:52:39 np0005531754.novalocal sshd-session[7985]: Connection closed by invalid user debian 103.175.73.3 port 59606 [preauth]
Nov 22 04:52:40 np0005531754.novalocal sshd-session[7987]: Invalid user debian from 103.175.73.3 port 60376
Nov 22 04:52:40 np0005531754.novalocal sshd-session[7987]: Connection closed by invalid user debian 103.175.73.3 port 60376 [preauth]
Nov 22 04:52:42 np0005531754.novalocal sshd-session[7989]: Invalid user debian from 103.175.73.3 port 60932
Nov 22 04:52:42 np0005531754.novalocal sshd-session[7989]: Connection closed by invalid user debian 103.175.73.3 port 60932 [preauth]
Nov 22 04:52:43 np0005531754.novalocal sshd-session[7991]: Invalid user debian from 103.175.73.3 port 33340
Nov 22 04:52:43 np0005531754.novalocal sshd-session[7991]: Connection closed by invalid user debian 103.175.73.3 port 33340 [preauth]
Nov 22 04:52:44 np0005531754.novalocal sshd-session[7993]: Invalid user debian from 103.175.73.3 port 34152
Nov 22 04:52:45 np0005531754.novalocal sshd-session[7993]: Connection closed by invalid user debian 103.175.73.3 port 34152 [preauth]
Nov 22 04:52:46 np0005531754.novalocal sshd-session[7995]: Invalid user debian from 103.175.73.3 port 34744
Nov 22 04:52:46 np0005531754.novalocal sshd-session[7995]: Connection closed by invalid user debian 103.175.73.3 port 34744 [preauth]
Nov 22 04:52:47 np0005531754.novalocal sshd-session[7997]: Invalid user debian from 103.175.73.3 port 35416
Nov 22 04:52:47 np0005531754.novalocal sshd-session[7997]: Connection closed by invalid user debian 103.175.73.3 port 35416 [preauth]
Nov 22 04:52:48 np0005531754.novalocal sshd-session[7999]: Invalid user debian from 103.175.73.3 port 36122
Nov 22 04:52:49 np0005531754.novalocal sshd-session[7999]: Connection closed by invalid user debian 103.175.73.3 port 36122 [preauth]
Nov 22 04:52:50 np0005531754.novalocal sshd-session[8001]: Invalid user debian from 103.175.73.3 port 36798
Nov 22 04:52:50 np0005531754.novalocal sshd-session[8001]: Connection closed by invalid user debian 103.175.73.3 port 36798 [preauth]
Nov 22 04:52:51 np0005531754.novalocal sshd-session[8003]: Invalid user debian from 103.175.73.3 port 37572
Nov 22 04:52:51 np0005531754.novalocal sshd-session[8003]: Connection closed by invalid user debian 103.175.73.3 port 37572 [preauth]
Nov 22 04:52:53 np0005531754.novalocal sshd-session[8005]: Invalid user debian from 103.175.73.3 port 38230
Nov 22 04:52:53 np0005531754.novalocal sshd-session[8005]: Connection closed by invalid user debian 103.175.73.3 port 38230 [preauth]
Nov 22 04:52:54 np0005531754.novalocal sshd-session[8007]: Invalid user debian from 103.175.73.3 port 38816
Nov 22 04:52:54 np0005531754.novalocal sshd-session[8007]: Connection closed by invalid user debian 103.175.73.3 port 38816 [preauth]
Nov 22 04:52:55 np0005531754.novalocal sshd-session[8009]: Invalid user debian from 103.175.73.3 port 39486
Nov 22 04:52:56 np0005531754.novalocal sshd-session[8009]: Connection closed by invalid user debian 103.175.73.3 port 39486 [preauth]
Nov 22 04:52:57 np0005531754.novalocal sshd-session[8011]: Invalid user debian from 103.175.73.3 port 40246
Nov 22 04:52:57 np0005531754.novalocal sshd-session[8011]: Connection closed by invalid user debian 103.175.73.3 port 40246 [preauth]
Nov 22 04:52:58 np0005531754.novalocal sshd-session[8013]: Invalid user debian from 103.175.73.3 port 40982
Nov 22 04:52:58 np0005531754.novalocal sshd-session[8013]: Connection closed by invalid user debian 103.175.73.3 port 40982 [preauth]
Nov 22 04:52:59 np0005531754.novalocal sshd-session[8015]: Invalid user debian from 103.175.73.3 port 41656
Nov 22 04:53:00 np0005531754.novalocal sshd-session[8015]: Connection closed by invalid user debian 103.175.73.3 port 41656 [preauth]
Nov 22 04:53:01 np0005531754.novalocal sshd-session[8017]: Invalid user debian from 103.175.73.3 port 42152
Nov 22 04:53:01 np0005531754.novalocal sshd-session[8017]: Connection closed by invalid user debian 103.175.73.3 port 42152 [preauth]
Nov 22 04:53:02 np0005531754.novalocal sshd-session[8019]: Invalid user debian from 103.175.73.3 port 42980
Nov 22 04:53:03 np0005531754.novalocal sshd-session[8019]: Connection closed by invalid user debian 103.175.73.3 port 42980 [preauth]
Nov 22 04:53:04 np0005531754.novalocal sshd-session[8021]: Invalid user debian from 103.175.73.3 port 43750
Nov 22 04:53:04 np0005531754.novalocal sshd-session[8021]: Connection closed by invalid user debian 103.175.73.3 port 43750 [preauth]
Nov 22 04:53:05 np0005531754.novalocal sshd-session[8023]: Invalid user debian from 103.175.73.3 port 44324
Nov 22 04:53:05 np0005531754.novalocal sshd-session[8023]: Connection closed by invalid user debian 103.175.73.3 port 44324 [preauth]
Nov 22 04:53:06 np0005531754.novalocal sshd-session[8025]: Invalid user debian from 103.175.73.3 port 44852
Nov 22 04:53:07 np0005531754.novalocal sshd-session[8025]: Connection closed by invalid user debian 103.175.73.3 port 44852 [preauth]
Nov 22 04:53:08 np0005531754.novalocal sshd-session[8027]: Invalid user debian from 103.175.73.3 port 45650
Nov 22 04:53:08 np0005531754.novalocal sshd-session[8027]: Connection closed by invalid user debian 103.175.73.3 port 45650 [preauth]
Nov 22 04:53:09 np0005531754.novalocal sshd-session[8029]: Invalid user debian from 103.175.73.3 port 46436
Nov 22 04:53:09 np0005531754.novalocal sshd-session[8029]: Connection closed by invalid user debian 103.175.73.3 port 46436 [preauth]
Nov 22 04:53:11 np0005531754.novalocal sshd-session[8031]: Invalid user debian from 103.175.73.3 port 47032
Nov 22 04:53:11 np0005531754.novalocal sshd-session[8031]: Connection closed by invalid user debian 103.175.73.3 port 47032 [preauth]
Nov 22 04:53:12 np0005531754.novalocal sshd-session[8033]: Invalid user debian from 103.175.73.3 port 47586
Nov 22 04:53:12 np0005531754.novalocal sshd-session[8033]: Connection closed by invalid user debian 103.175.73.3 port 47586 [preauth]
Nov 22 04:53:13 np0005531754.novalocal sshd-session[8035]: Invalid user debian from 103.175.73.3 port 48278
Nov 22 04:53:14 np0005531754.novalocal sshd-session[8035]: Connection closed by invalid user debian 103.175.73.3 port 48278 [preauth]
Nov 22 04:53:15 np0005531754.novalocal sshd-session[8037]: Invalid user debian from 103.175.73.3 port 49010
Nov 22 04:53:15 np0005531754.novalocal sshd-session[8037]: Connection closed by invalid user debian 103.175.73.3 port 49010 [preauth]
Nov 22 04:53:16 np0005531754.novalocal sshd-session[8039]: Invalid user debian from 103.175.73.3 port 49738
Nov 22 04:53:16 np0005531754.novalocal sshd-session[8039]: Connection closed by invalid user debian 103.175.73.3 port 49738 [preauth]
Nov 22 04:53:17 np0005531754.novalocal sshd-session[8041]: Invalid user debian from 103.175.73.3 port 50338
Nov 22 04:53:18 np0005531754.novalocal sshd-session[8041]: Connection closed by invalid user debian 103.175.73.3 port 50338 [preauth]
Nov 22 04:53:19 np0005531754.novalocal sshd-session[8043]: Invalid user debian from 103.175.73.3 port 50972
Nov 22 04:53:19 np0005531754.novalocal sshd-session[8043]: Connection closed by invalid user debian 103.175.73.3 port 50972 [preauth]
Nov 22 04:53:20 np0005531754.novalocal sshd-session[8045]: Invalid user debian from 103.175.73.3 port 51726
Nov 22 04:53:20 np0005531754.novalocal sshd-session[8045]: Connection closed by invalid user debian 103.175.73.3 port 51726 [preauth]
Nov 22 04:53:22 np0005531754.novalocal sshd-session[8047]: Invalid user debian from 103.175.73.3 port 52418
Nov 22 04:53:22 np0005531754.novalocal sshd-session[8047]: Connection closed by invalid user debian 103.175.73.3 port 52418 [preauth]
Nov 22 04:53:23 np0005531754.novalocal sshd-session[8049]: Invalid user debian from 103.175.73.3 port 52964
Nov 22 04:53:23 np0005531754.novalocal sshd-session[8049]: Connection closed by invalid user debian 103.175.73.3 port 52964 [preauth]
Nov 22 04:53:24 np0005531754.novalocal sshd-session[8051]: Invalid user debian from 103.175.73.3 port 53624
Nov 22 04:53:25 np0005531754.novalocal sshd-session[8051]: Connection closed by invalid user debian 103.175.73.3 port 53624 [preauth]
Nov 22 04:53:26 np0005531754.novalocal sshd-session[8053]: Invalid user debian from 103.175.73.3 port 54324
Nov 22 04:53:26 np0005531754.novalocal sshd-session[8053]: Connection closed by invalid user debian 103.175.73.3 port 54324 [preauth]
Nov 22 04:53:27 np0005531754.novalocal sshd-session[8055]: Invalid user debian from 103.175.73.3 port 55046
Nov 22 04:53:27 np0005531754.novalocal sshd-session[8055]: Connection closed by invalid user debian 103.175.73.3 port 55046 [preauth]
Nov 22 04:53:28 np0005531754.novalocal sshd-session[8058]: Invalid user debian from 103.175.73.3 port 55610
Nov 22 04:53:29 np0005531754.novalocal sshd-session[8058]: Connection closed by invalid user debian 103.175.73.3 port 55610 [preauth]
Nov 22 04:53:30 np0005531754.novalocal sshd-session[8060]: Invalid user debian from 103.175.73.3 port 56364
Nov 22 04:53:30 np0005531754.novalocal sshd-session[8060]: Connection closed by invalid user debian 103.175.73.3 port 56364 [preauth]
Nov 22 04:53:31 np0005531754.novalocal sshd-session[8062]: Invalid user debian from 103.175.73.3 port 57022
Nov 22 04:53:31 np0005531754.novalocal sshd-session[8062]: Connection closed by invalid user debian 103.175.73.3 port 57022 [preauth]
Nov 22 04:53:33 np0005531754.novalocal sshd-session[8064]: Invalid user debian from 103.175.73.3 port 57916
Nov 22 04:53:33 np0005531754.novalocal sshd-session[8064]: Connection closed by invalid user debian 103.175.73.3 port 57916 [preauth]
Nov 22 04:53:34 np0005531754.novalocal sshd-session[8066]: Invalid user debian from 103.175.73.3 port 58410
Nov 22 04:53:34 np0005531754.novalocal sshd-session[8066]: Connection closed by invalid user debian 103.175.73.3 port 58410 [preauth]
Nov 22 04:53:35 np0005531754.novalocal sshd-session[8068]: Invalid user debian from 103.175.73.3 port 59024
Nov 22 04:53:36 np0005531754.novalocal sshd-session[8068]: Connection closed by invalid user debian 103.175.73.3 port 59024 [preauth]
Nov 22 04:53:37 np0005531754.novalocal sshd-session[8070]: Invalid user debian from 103.175.73.3 port 59718
Nov 22 04:53:37 np0005531754.novalocal sshd-session[8070]: Connection closed by invalid user debian 103.175.73.3 port 59718 [preauth]
Nov 22 04:53:38 np0005531754.novalocal sshd-session[8072]: Invalid user debian from 103.175.73.3 port 60440
Nov 22 04:53:38 np0005531754.novalocal sshd-session[8072]: Connection closed by invalid user debian 103.175.73.3 port 60440 [preauth]
Nov 22 04:53:39 np0005531754.novalocal sshd-session[8074]: Invalid user debian from 103.175.73.3 port 32798
Nov 22 04:53:40 np0005531754.novalocal sshd-session[8074]: Connection closed by invalid user debian 103.175.73.3 port 32798 [preauth]
Nov 22 04:53:41 np0005531754.novalocal sshd-session[8076]: Invalid user debian from 103.175.73.3 port 33456
Nov 22 04:53:41 np0005531754.novalocal sshd-session[8076]: Connection closed by invalid user debian 103.175.73.3 port 33456 [preauth]
Nov 22 04:53:42 np0005531754.novalocal sshd-session[8078]: Invalid user debian from 103.175.73.3 port 34212
Nov 22 04:53:42 np0005531754.novalocal sshd-session[8078]: Connection closed by invalid user debian 103.175.73.3 port 34212 [preauth]
Nov 22 04:53:44 np0005531754.novalocal sshd-session[8080]: Invalid user debian from 103.175.73.3 port 35062
Nov 22 04:53:44 np0005531754.novalocal sshd-session[8080]: Connection closed by invalid user debian 103.175.73.3 port 35062 [preauth]
Nov 22 04:53:45 np0005531754.novalocal sshd-session[8082]: Invalid user debian from 103.175.73.3 port 35774
Nov 22 04:53:45 np0005531754.novalocal sshd-session[8082]: Connection closed by invalid user debian 103.175.73.3 port 35774 [preauth]
Nov 22 04:53:46 np0005531754.novalocal sshd-session[8084]: Invalid user debian from 103.175.73.3 port 36178
Nov 22 04:53:47 np0005531754.novalocal sshd-session[8084]: Connection closed by invalid user debian 103.175.73.3 port 36178 [preauth]
Nov 22 04:53:48 np0005531754.novalocal sshd-session[8086]: Invalid user debian from 103.175.73.3 port 36840
Nov 22 04:53:48 np0005531754.novalocal sshd-session[8086]: Connection closed by invalid user debian 103.175.73.3 port 36840 [preauth]
Nov 22 04:53:49 np0005531754.novalocal sshd-session[8088]: Invalid user debian from 103.175.73.3 port 37626
Nov 22 04:53:49 np0005531754.novalocal sshd-session[8088]: Connection closed by invalid user debian 103.175.73.3 port 37626 [preauth]
Nov 22 04:53:50 np0005531754.novalocal sshd-session[8090]: Invalid user debian from 103.175.73.3 port 38220
Nov 22 04:53:51 np0005531754.novalocal sshd-session[8090]: Connection closed by invalid user debian 103.175.73.3 port 38220 [preauth]
Nov 22 04:53:52 np0005531754.novalocal sshd-session[8092]: Invalid user debian from 103.175.73.3 port 38910
Nov 22 04:53:52 np0005531754.novalocal sshd-session[8092]: Connection closed by invalid user debian 103.175.73.3 port 38910 [preauth]
Nov 22 04:53:53 np0005531754.novalocal sshd-session[8094]: Invalid user debian from 103.175.73.3 port 39672
Nov 22 04:53:53 np0005531754.novalocal sshd-session[8094]: Connection closed by invalid user debian 103.175.73.3 port 39672 [preauth]
Nov 22 04:53:54 np0005531754.novalocal sshd-session[8096]: Invalid user debian from 103.175.73.3 port 40356
Nov 22 04:53:55 np0005531754.novalocal sshd-session[8096]: Connection closed by invalid user debian 103.175.73.3 port 40356 [preauth]
Nov 22 04:53:56 np0005531754.novalocal sshd-session[8098]: Invalid user debian from 103.175.73.3 port 41280
Nov 22 04:53:56 np0005531754.novalocal sshd-session[8098]: Connection closed by invalid user debian 103.175.73.3 port 41280 [preauth]
Nov 22 04:53:57 np0005531754.novalocal sshd-session[8100]: Invalid user debian from 103.175.73.3 port 41926
Nov 22 04:53:57 np0005531754.novalocal sshd-session[8100]: Connection closed by invalid user debian 103.175.73.3 port 41926 [preauth]
Nov 22 04:53:59 np0005531754.novalocal sshd-session[8102]: Invalid user debian from 103.175.73.3 port 42310
Nov 22 04:53:59 np0005531754.novalocal sshd-session[8102]: Connection closed by invalid user debian 103.175.73.3 port 42310 [preauth]
Nov 22 04:54:00 np0005531754.novalocal sshd-session[8104]: Invalid user debian from 103.175.73.3 port 43068
Nov 22 04:54:00 np0005531754.novalocal sshd-session[8104]: Connection closed by invalid user debian 103.175.73.3 port 43068 [preauth]
Nov 22 04:54:00 np0005531754.novalocal sshd-session[8106]: Invalid user ubuntu from 80.94.92.166 port 35102
Nov 22 04:54:01 np0005531754.novalocal sshd-session[8106]: Connection closed by invalid user ubuntu 80.94.92.166 port 35102 [preauth]
Nov 22 04:54:01 np0005531754.novalocal sshd-session[8108]: Invalid user debian from 103.175.73.3 port 43826
Nov 22 04:54:02 np0005531754.novalocal sshd-session[8108]: Connection closed by invalid user debian 103.175.73.3 port 43826 [preauth]
Nov 22 04:54:03 np0005531754.novalocal sshd-session[8110]: Invalid user debian from 103.175.73.3 port 44506
Nov 22 04:54:03 np0005531754.novalocal sshd-session[8110]: Connection closed by invalid user debian 103.175.73.3 port 44506 [preauth]
Nov 22 04:54:04 np0005531754.novalocal sshd-session[8112]: Invalid user debian from 103.175.73.3 port 45156
Nov 22 04:54:04 np0005531754.novalocal sshd-session[8112]: Connection closed by invalid user debian 103.175.73.3 port 45156 [preauth]
Nov 22 04:54:05 np0005531754.novalocal sshd-session[8114]: Invalid user debian from 103.175.73.3 port 45914
Nov 22 04:54:06 np0005531754.novalocal sshd-session[8114]: Connection closed by invalid user debian 103.175.73.3 port 45914 [preauth]
Nov 22 04:54:07 np0005531754.novalocal sshd-session[8116]: Invalid user debian from 103.175.73.3 port 46664
Nov 22 04:54:07 np0005531754.novalocal sshd-session[8116]: Connection closed by invalid user debian 103.175.73.3 port 46664 [preauth]
Nov 22 04:54:08 np0005531754.novalocal sshd-session[8118]: Invalid user debian from 103.175.73.3 port 47346
Nov 22 04:54:08 np0005531754.novalocal sshd-session[8118]: Connection closed by invalid user debian 103.175.73.3 port 47346 [preauth]
Nov 22 04:54:10 np0005531754.novalocal sshd-session[8120]: Invalid user debian from 103.175.73.3 port 47782
Nov 22 04:54:10 np0005531754.novalocal sshd-session[8120]: Connection closed by invalid user debian 103.175.73.3 port 47782 [preauth]
Nov 22 04:54:11 np0005531754.novalocal sshd-session[8122]: Invalid user debian from 103.175.73.3 port 48476
Nov 22 04:54:11 np0005531754.novalocal sshd-session[8122]: Connection closed by invalid user debian 103.175.73.3 port 48476 [preauth]
Nov 22 04:54:12 np0005531754.novalocal sshd-session[8124]: Invalid user debian from 103.175.73.3 port 49224
Nov 22 04:54:13 np0005531754.novalocal sshd-session[8124]: Connection closed by invalid user debian 103.175.73.3 port 49224 [preauth]
Nov 22 04:54:14 np0005531754.novalocal sshd-session[8126]: Invalid user debian from 103.175.73.3 port 49830
Nov 22 04:54:14 np0005531754.novalocal sshd-session[8126]: Connection closed by invalid user debian 103.175.73.3 port 49830 [preauth]
Nov 22 04:54:15 np0005531754.novalocal sshd-session[8128]: Invalid user debian from 103.175.73.3 port 50526
Nov 22 04:54:15 np0005531754.novalocal sshd-session[8128]: Connection closed by invalid user debian 103.175.73.3 port 50526 [preauth]
Nov 22 04:54:16 np0005531754.novalocal sshd-session[8130]: Invalid user debian from 103.175.73.3 port 51374
Nov 22 04:54:17 np0005531754.novalocal sshd-session[8130]: Connection closed by invalid user debian 103.175.73.3 port 51374 [preauth]
Nov 22 04:54:18 np0005531754.novalocal sshd-session[8132]: Invalid user admin from 103.175.73.3 port 52088
Nov 22 04:54:18 np0005531754.novalocal sshd-session[8132]: Connection closed by invalid user admin 103.175.73.3 port 52088 [preauth]
Nov 22 04:54:19 np0005531754.novalocal sshd-session[8134]: Invalid user admin from 103.175.73.3 port 53010
Nov 22 04:54:20 np0005531754.novalocal sshd-session[8134]: Connection closed by invalid user admin 103.175.73.3 port 53010 [preauth]
Nov 22 04:54:21 np0005531754.novalocal sshd-session[8136]: Invalid user admin from 103.175.73.3 port 53562
Nov 22 04:54:21 np0005531754.novalocal sshd-session[8136]: Connection closed by invalid user admin 103.175.73.3 port 53562 [preauth]
Nov 22 04:54:22 np0005531754.novalocal sshd-session[8138]: Invalid user admin from 103.175.73.3 port 54096
Nov 22 04:54:22 np0005531754.novalocal sshd-session[8138]: Connection closed by invalid user admin 103.175.73.3 port 54096 [preauth]
Nov 22 04:54:24 np0005531754.novalocal sshd-session[8140]: Invalid user admin from 103.175.73.3 port 54816
Nov 22 04:54:24 np0005531754.novalocal sshd-session[8140]: Connection closed by invalid user admin 103.175.73.3 port 54816 [preauth]
Nov 22 04:54:25 np0005531754.novalocal sshd-session[8142]: Invalid user admin from 103.175.73.3 port 55508
Nov 22 04:54:25 np0005531754.novalocal sshd-session[8142]: Connection closed by invalid user admin 103.175.73.3 port 55508 [preauth]
Nov 22 04:54:26 np0005531754.novalocal sshd-session[8144]: Invalid user admin from 103.175.73.3 port 56172
Nov 22 04:54:27 np0005531754.novalocal sshd-session[8144]: Connection closed by invalid user admin 103.175.73.3 port 56172 [preauth]
Nov 22 04:54:28 np0005531754.novalocal sshd-session[8146]: Invalid user admin from 103.175.73.3 port 56948
Nov 22 04:54:28 np0005531754.novalocal sshd-session[8146]: Connection closed by invalid user admin 103.175.73.3 port 56948 [preauth]
Nov 22 04:54:29 np0005531754.novalocal sshd-session[8148]: Invalid user admin from 103.175.73.3 port 57758
Nov 22 04:54:29 np0005531754.novalocal sshd-session[8148]: Connection closed by invalid user admin 103.175.73.3 port 57758 [preauth]
Nov 22 04:54:30 np0005531754.novalocal sshd-session[8150]: Invalid user admin from 103.175.73.3 port 58402
Nov 22 04:54:31 np0005531754.novalocal sshd-session[8150]: Connection closed by invalid user admin 103.175.73.3 port 58402 [preauth]
Nov 22 04:54:32 np0005531754.novalocal sshd-session[8152]: Invalid user admin from 103.175.73.3 port 59382
Nov 22 04:54:32 np0005531754.novalocal sshd-session[8152]: Connection closed by invalid user admin 103.175.73.3 port 59382 [preauth]
Nov 22 04:54:33 np0005531754.novalocal sshd-session[8154]: Invalid user admin from 103.175.73.3 port 59752
Nov 22 04:54:33 np0005531754.novalocal sshd-session[8154]: Connection closed by invalid user admin 103.175.73.3 port 59752 [preauth]
Nov 22 04:54:34 np0005531754.novalocal sshd-session[8156]: Invalid user admin from 103.175.73.3 port 60370
Nov 22 04:54:35 np0005531754.novalocal sshd-session[8156]: Connection closed by invalid user admin 103.175.73.3 port 60370 [preauth]
Nov 22 04:54:36 np0005531754.novalocal sshd-session[8158]: Invalid user admin from 103.175.73.3 port 32930
Nov 22 04:54:36 np0005531754.novalocal sshd-session[8158]: Connection closed by invalid user admin 103.175.73.3 port 32930 [preauth]
Nov 22 04:54:37 np0005531754.novalocal sshd-session[8160]: Invalid user admin from 103.175.73.3 port 33602
Nov 22 04:54:37 np0005531754.novalocal sshd-session[8160]: Connection closed by invalid user admin 103.175.73.3 port 33602 [preauth]
Nov 22 04:54:39 np0005531754.novalocal sshd-session[8162]: Invalid user admin from 103.175.73.3 port 34394
Nov 22 04:54:39 np0005531754.novalocal sshd-session[8162]: Connection closed by invalid user admin 103.175.73.3 port 34394 [preauth]
Nov 22 04:54:40 np0005531754.novalocal sshd-session[8164]: Invalid user admin from 103.175.73.3 port 35122
Nov 22 04:54:40 np0005531754.novalocal sshd-session[8164]: Connection closed by invalid user admin 103.175.73.3 port 35122 [preauth]
Nov 22 04:54:41 np0005531754.novalocal sshd-session[8166]: Invalid user admin from 103.175.73.3 port 35838
Nov 22 04:54:42 np0005531754.novalocal sshd-session[8166]: Connection closed by invalid user admin 103.175.73.3 port 35838 [preauth]
Nov 22 04:54:43 np0005531754.novalocal sshd-session[8168]: Invalid user admin from 103.175.73.3 port 36520
Nov 22 04:54:43 np0005531754.novalocal sshd-session[8168]: Connection closed by invalid user admin 103.175.73.3 port 36520 [preauth]
Nov 22 04:54:44 np0005531754.novalocal sshd-session[8170]: Invalid user admin from 103.175.73.3 port 37160
Nov 22 04:54:44 np0005531754.novalocal sshd-session[8170]: Connection closed by invalid user admin 103.175.73.3 port 37160 [preauth]
Nov 22 04:54:46 np0005531754.novalocal sshd-session[8173]: Invalid user admin from 103.175.73.3 port 37734
Nov 22 04:54:46 np0005531754.novalocal sshd-session[8173]: Connection closed by invalid user admin 103.175.73.3 port 37734 [preauth]
Nov 22 04:54:47 np0005531754.novalocal sshd-session[8175]: Invalid user admin from 103.175.73.3 port 38420
Nov 22 04:54:47 np0005531754.novalocal sshd-session[8175]: Connection closed by invalid user admin 103.175.73.3 port 38420 [preauth]
Nov 22 04:54:48 np0005531754.novalocal sshd-session[8177]: Invalid user admin from 103.175.73.3 port 39128
Nov 22 04:54:49 np0005531754.novalocal sshd-session[8177]: Connection closed by invalid user admin 103.175.73.3 port 39128 [preauth]
Nov 22 04:54:50 np0005531754.novalocal sshd-session[8179]: Invalid user admin from 103.175.73.3 port 40020
Nov 22 04:54:50 np0005531754.novalocal sshd-session[8179]: Connection closed by invalid user admin 103.175.73.3 port 40020 [preauth]
Nov 22 04:54:51 np0005531754.novalocal sshd-session[8181]: Invalid user admin from 103.175.73.3 port 40790
Nov 22 04:54:51 np0005531754.novalocal sshd-session[8181]: Connection closed by invalid user admin 103.175.73.3 port 40790 [preauth]
Nov 22 04:54:52 np0005531754.novalocal sshd-session[8183]: Invalid user admin from 103.175.73.3 port 41330
Nov 22 04:54:53 np0005531754.novalocal sshd-session[8183]: Connection closed by invalid user admin 103.175.73.3 port 41330 [preauth]
Nov 22 04:54:54 np0005531754.novalocal sshd-session[8185]: Invalid user admin from 103.175.73.3 port 42194
Nov 22 04:54:54 np0005531754.novalocal sshd-session[8185]: Connection closed by invalid user admin 103.175.73.3 port 42194 [preauth]
Nov 22 04:54:55 np0005531754.novalocal sshd-session[8187]: Invalid user admin from 103.175.73.3 port 42916
Nov 22 04:54:55 np0005531754.novalocal sshd-session[8187]: Connection closed by invalid user admin 103.175.73.3 port 42916 [preauth]
Nov 22 04:54:57 np0005531754.novalocal sshd-session[8189]: Invalid user admin from 103.175.73.3 port 43428
Nov 22 04:54:57 np0005531754.novalocal sshd-session[8189]: Connection closed by invalid user admin 103.175.73.3 port 43428 [preauth]
Nov 22 04:54:58 np0005531754.novalocal sshd-session[8191]: Invalid user admin from 103.175.73.3 port 44082
Nov 22 04:54:58 np0005531754.novalocal sshd-session[8191]: Connection closed by invalid user admin 103.175.73.3 port 44082 [preauth]
Nov 22 04:54:59 np0005531754.novalocal sshd-session[8193]: Invalid user admin from 103.175.73.3 port 44800
Nov 22 04:55:00 np0005531754.novalocal sshd-session[8193]: Connection closed by invalid user admin 103.175.73.3 port 44800 [preauth]
Nov 22 04:55:01 np0005531754.novalocal sshd-session[8195]: Invalid user admin from 103.175.73.3 port 45548
Nov 22 04:55:01 np0005531754.novalocal sshd-session[8195]: Connection closed by invalid user admin 103.175.73.3 port 45548 [preauth]
Nov 22 04:55:02 np0005531754.novalocal sshd-session[8200]: Accepted publickey for zuul from 38.102.83.114 port 33096 ssh2: RSA SHA256:723CLJgh9jzg+4Vfbb+tCqWKZy25P9e6Oul69vFbpik
Nov 22 04:55:02 np0005531754.novalocal systemd-logind[798]: New session 3 of user zuul.
Nov 22 04:55:02 np0005531754.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 22 04:55:02 np0005531754.novalocal sshd-session[8200]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:55:02 np0005531754.novalocal sudo[8227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzwmwcplfjprtzahhfhqophazyqccdnv ; /usr/bin/python3'
Nov 22 04:55:02 np0005531754.novalocal sudo[8227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:02 np0005531754.novalocal sshd-session[8197]: Invalid user admin from 103.175.73.3 port 46352
Nov 22 04:55:02 np0005531754.novalocal python3[8229]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ec2-ffbe-409c-f072-000000001cc6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:02 np0005531754.novalocal sudo[8227]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:02 np0005531754.novalocal sudo[8255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfeofktbeeijdlkmsbleswmufdlingfe ; /usr/bin/python3'
Nov 22 04:55:02 np0005531754.novalocal sshd-session[8197]: Connection closed by invalid user admin 103.175.73.3 port 46352 [preauth]
Nov 22 04:55:02 np0005531754.novalocal sudo[8255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:02 np0005531754.novalocal python3[8257]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:02 np0005531754.novalocal sudo[8255]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:03 np0005531754.novalocal sudo[8281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttivjgbgzdosfwcullgdjiknibjnttre ; /usr/bin/python3'
Nov 22 04:55:03 np0005531754.novalocal sudo[8281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:03 np0005531754.novalocal python3[8284]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:03 np0005531754.novalocal sudo[8281]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:03 np0005531754.novalocal sudo[8310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnodhdkmkiugvyazmhdypxsibtinzed ; /usr/bin/python3'
Nov 22 04:55:03 np0005531754.novalocal sudo[8310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:03 np0005531754.novalocal python3[8312]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:03 np0005531754.novalocal sudo[8310]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:03 np0005531754.novalocal sudo[8336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqjprversmagoqkjgjlhycntcvswsezb ; /usr/bin/python3'
Nov 22 04:55:03 np0005531754.novalocal sudo[8336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:03 np0005531754.novalocal python3[8338]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:03 np0005531754.novalocal sudo[8336]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:03 np0005531754.novalocal sshd-session[8283]: Invalid user admin from 103.175.73.3 port 47002
Nov 22 04:55:04 np0005531754.novalocal sudo[8362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anopebghtvctovauwsyxfpsfqpmcoeaj ; /usr/bin/python3'
Nov 22 04:55:04 np0005531754.novalocal sudo[8362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:04 np0005531754.novalocal sshd-session[8283]: Connection closed by invalid user admin 103.175.73.3 port 47002 [preauth]
Nov 22 04:55:04 np0005531754.novalocal python3[8364]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:04 np0005531754.novalocal sudo[8362]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:04 np0005531754.novalocal sudo[8442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrhahidknvvceudejfmcbqkxjzeqajfg ; /usr/bin/python3'
Nov 22 04:55:04 np0005531754.novalocal sudo[8442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:04 np0005531754.novalocal python3[8444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:55:04 np0005531754.novalocal sudo[8442]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:05 np0005531754.novalocal sudo[8515]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbtsozidxacteemddkdzwxfamxyjjeos ; /usr/bin/python3'
Nov 22 04:55:05 np0005531754.novalocal sudo[8515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:05 np0005531754.novalocal python3[8517]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763787304.4416366-476-246523104748827/source _original_basename=tmpy7t_n1x5 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:55:05 np0005531754.novalocal sudo[8515]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:05 np0005531754.novalocal sshd-session[8380]: Invalid user admin from 103.175.73.3 port 47764
Nov 22 04:55:05 np0005531754.novalocal sshd-session[8380]: Connection closed by invalid user admin 103.175.73.3 port 47764 [preauth]
Nov 22 04:55:05 np0005531754.novalocal sudo[8565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgjtsdwnmpjbnwxohspfqhhvsnrpvyfn ; /usr/bin/python3'
Nov 22 04:55:05 np0005531754.novalocal sudo[8565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:06 np0005531754.novalocal python3[8567]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 04:55:06 np0005531754.novalocal systemd[1]: Reloading.
Nov 22 04:55:06 np0005531754.novalocal systemd-rc-local-generator[8591]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 04:55:06 np0005531754.novalocal sudo[8565]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:06 np0005531754.novalocal sshd-session[8568]: Invalid user admin from 103.175.73.3 port 48660
Nov 22 04:55:06 np0005531754.novalocal sshd-session[8568]: Connection closed by invalid user admin 103.175.73.3 port 48660 [preauth]
Nov 22 04:55:07 np0005531754.novalocal sudo[8625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddpdvfqfnnpfaecgvlygbncsjpemfju ; /usr/bin/python3'
Nov 22 04:55:07 np0005531754.novalocal sudo[8625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:07 np0005531754.novalocal python3[8627]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 22 04:55:07 np0005531754.novalocal sudo[8625]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:07 np0005531754.novalocal sudo[8651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgxvgjwdjzkqmlrrvfnogogtrknghgs ; /usr/bin/python3'
Nov 22 04:55:07 np0005531754.novalocal sudo[8651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:07 np0005531754.novalocal python3[8653]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:08 np0005531754.novalocal sudo[8651]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:08 np0005531754.novalocal sshd-session[8600]: Invalid user admin from 103.175.73.3 port 49128
Nov 22 04:55:08 np0005531754.novalocal sudo[8679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblmtmtebdejnwxdjrsatyyipialtcci ; /usr/bin/python3'
Nov 22 04:55:08 np0005531754.novalocal sudo[8679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:08 np0005531754.novalocal sshd-session[8600]: Connection closed by invalid user admin 103.175.73.3 port 49128 [preauth]
Nov 22 04:55:08 np0005531754.novalocal python3[8681]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:08 np0005531754.novalocal sudo[8679]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:08 np0005531754.novalocal sudo[8707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoecemzopqqngzqpmachefitsjadekic ; /usr/bin/python3'
Nov 22 04:55:08 np0005531754.novalocal sudo[8707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:08 np0005531754.novalocal python3[8709]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:08 np0005531754.novalocal sudo[8707]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:08 np0005531754.novalocal sudo[8737]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzklcirbkzmgtpqvxsxiualsrugpeijz ; /usr/bin/python3'
Nov 22 04:55:08 np0005531754.novalocal sudo[8737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:08 np0005531754.novalocal python3[8739]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:08 np0005531754.novalocal sudo[8737]: pam_unix(sudo:session): session closed for user root
Nov 22 04:55:09 np0005531754.novalocal python3[8766]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ec2-ffbe-409c-f072-000000001ccd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:55:09 np0005531754.novalocal sshd-session[8710]: Invalid user admin from 103.175.73.3 port 49784
Nov 22 04:55:09 np0005531754.novalocal sshd-session[8710]: Connection closed by invalid user admin 103.175.73.3 port 49784 [preauth]
Nov 22 04:55:09 np0005531754.novalocal python3[8796]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 04:55:10 np0005531754.novalocal sshd-session[8797]: Invalid user admin from 103.175.73.3 port 50550
Nov 22 04:55:11 np0005531754.novalocal sshd-session[8797]: Connection closed by invalid user admin 103.175.73.3 port 50550 [preauth]
Nov 22 04:55:11 np0005531754.novalocal sshd-session[8203]: Connection closed by 38.102.83.114 port 33096
Nov 22 04:55:11 np0005531754.novalocal sshd-session[8200]: pam_unix(sshd:session): session closed for user zuul
Nov 22 04:55:11 np0005531754.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 22 04:55:11 np0005531754.novalocal systemd[1]: session-3.scope: Consumed 4.328s CPU time.
Nov 22 04:55:11 np0005531754.novalocal systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Nov 22 04:55:11 np0005531754.novalocal systemd-logind[798]: Removed session 3.
Nov 22 04:55:12 np0005531754.novalocal sshd-session[8801]: Invalid user admin from 103.175.73.3 port 51200
Nov 22 04:55:12 np0005531754.novalocal sshd-session[8801]: Connection closed by invalid user admin 103.175.73.3 port 51200 [preauth]
Nov 22 04:55:13 np0005531754.novalocal sshd-session[8807]: Accepted publickey for zuul from 38.102.83.114 port 56218 ssh2: RSA SHA256:723CLJgh9jzg+4Vfbb+tCqWKZy25P9e6Oul69vFbpik
Nov 22 04:55:13 np0005531754.novalocal systemd-logind[798]: New session 4 of user zuul.
Nov 22 04:55:13 np0005531754.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 22 04:55:13 np0005531754.novalocal sshd-session[8807]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:55:13 np0005531754.novalocal sudo[8834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gssztwztyckdnkwvalzaqkmthbuextcd ; /usr/bin/python3'
Nov 22 04:55:13 np0005531754.novalocal sudo[8834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:55:13 np0005531754.novalocal sshd-session[8805]: Invalid user admin from 103.175.73.3 port 51950
Nov 22 04:55:13 np0005531754.novalocal python3[8836]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 04:55:13 np0005531754.novalocal sshd-session[8805]: Connection closed by invalid user admin 103.175.73.3 port 51950 [preauth]
Nov 22 04:55:15 np0005531754.novalocal sshd-session[8838]: Invalid user admin from 103.175.73.3 port 52772
Nov 22 04:55:15 np0005531754.novalocal sshd-session[8838]: Connection closed by invalid user admin 103.175.73.3 port 52772 [preauth]
Nov 22 04:55:16 np0005531754.novalocal sshd-session[8840]: Invalid user admin from 103.175.73.3 port 53490
Nov 22 04:55:16 np0005531754.novalocal sshd-session[8840]: Connection closed by invalid user admin 103.175.73.3 port 53490 [preauth]
Nov 22 04:55:17 np0005531754.novalocal sshd-session[8847]: Invalid user admin from 103.175.73.3 port 54354
Nov 22 04:55:18 np0005531754.novalocal sshd-session[8847]: Connection closed by invalid user admin 103.175.73.3 port 54354 [preauth]
Nov 22 04:55:19 np0005531754.novalocal sshd-session[8852]: Invalid user admin from 103.175.73.3 port 54902
Nov 22 04:55:19 np0005531754.novalocal sshd-session[8852]: Connection closed by invalid user admin 103.175.73.3 port 54902 [preauth]
Nov 22 04:55:20 np0005531754.novalocal sshd-session[8858]: Invalid user admin from 103.175.73.3 port 55416
Nov 22 04:55:20 np0005531754.novalocal sshd-session[8858]: Connection closed by invalid user admin 103.175.73.3 port 55416 [preauth]
Nov 22 04:55:22 np0005531754.novalocal sshd-session[8860]: Invalid user admin from 103.175.73.3 port 56282
Nov 22 04:55:22 np0005531754.novalocal sshd-session[8860]: Connection closed by invalid user admin 103.175.73.3 port 56282 [preauth]
Nov 22 04:55:23 np0005531754.novalocal sshd-session[8886]: Invalid user admin from 103.175.73.3 port 56946
Nov 22 04:55:23 np0005531754.novalocal irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 22 04:55:23 np0005531754.novalocal irqbalance[791]: IRQ 27 affinity is now unmanaged
Nov 22 04:55:23 np0005531754.novalocal sshd-session[8886]: Connection closed by invalid user admin 103.175.73.3 port 56946 [preauth]
Nov 22 04:55:24 np0005531754.novalocal sshd-session[8893]: Invalid user admin from 103.175.73.3 port 57696
Nov 22 04:55:25 np0005531754.novalocal sshd-session[8893]: Connection closed by invalid user admin 103.175.73.3 port 57696 [preauth]
Nov 22 04:55:26 np0005531754.novalocal sshd-session[8896]: Invalid user admin from 103.175.73.3 port 58582
Nov 22 04:55:26 np0005531754.novalocal sshd-session[8896]: Connection closed by invalid user admin 103.175.73.3 port 58582 [preauth]
Nov 22 04:55:27 np0005531754.novalocal sshd-session[8898]: Invalid user admin from 103.175.73.3 port 59248
Nov 22 04:55:27 np0005531754.novalocal sshd-session[8898]: Connection closed by invalid user admin 103.175.73.3 port 59248 [preauth]
Nov 22 04:55:29 np0005531754.novalocal sshd-session[8900]: Invalid user admin from 103.175.73.3 port 60032
Nov 22 04:55:29 np0005531754.novalocal sshd-session[8900]: Connection closed by invalid user admin 103.175.73.3 port 60032 [preauth]
Nov 22 04:55:30 np0005531754.novalocal sshd-session[8902]: Invalid user admin from 103.175.73.3 port 60836
Nov 22 04:55:30 np0005531754.novalocal sshd-session[8902]: Connection closed by invalid user admin 103.175.73.3 port 60836 [preauth]
Nov 22 04:55:31 np0005531754.novalocal sshd-session[8904]: Invalid user admin from 103.175.73.3 port 33012
Nov 22 04:55:31 np0005531754.novalocal sshd-session[8904]: Connection closed by invalid user admin 103.175.73.3 port 33012 [preauth]
Nov 22 04:55:33 np0005531754.novalocal sshd-session[8906]: Invalid user admin from 103.175.73.3 port 33700
Nov 22 04:55:33 np0005531754.novalocal sshd-session[8906]: Connection closed by invalid user admin 103.175.73.3 port 33700 [preauth]
Nov 22 04:55:34 np0005531754.novalocal sshd-session[8911]: Invalid user admin from 103.175.73.3 port 34606
Nov 22 04:55:34 np0005531754.novalocal sshd-session[8911]: Connection closed by invalid user admin 103.175.73.3 port 34606 [preauth]
Nov 22 04:55:35 np0005531754.novalocal sshd-session[8913]: Invalid user admin from 103.175.73.3 port 35240
Nov 22 04:55:36 np0005531754.novalocal sshd-session[8913]: Connection closed by invalid user admin 103.175.73.3 port 35240 [preauth]
Nov 22 04:55:37 np0005531754.novalocal sshd-session[8915]: Invalid user admin from 103.175.73.3 port 35888
Nov 22 04:55:37 np0005531754.novalocal sshd-session[8915]: Connection closed by invalid user admin 103.175.73.3 port 35888 [preauth]
Nov 22 04:55:38 np0005531754.novalocal sshd-session[8917]: Invalid user admin from 103.175.73.3 port 36718
Nov 22 04:55:38 np0005531754.novalocal sshd-session[8917]: Connection closed by invalid user admin 103.175.73.3 port 36718 [preauth]
Nov 22 04:55:39 np0005531754.novalocal sshd-session[8919]: Invalid user admin from 103.175.73.3 port 37490
Nov 22 04:55:40 np0005531754.novalocal sshd-session[8919]: Connection closed by invalid user admin 103.175.73.3 port 37490 [preauth]
Nov 22 04:55:41 np0005531754.novalocal sshd-session[8921]: Invalid user admin from 103.175.73.3 port 38138
Nov 22 04:55:41 np0005531754.novalocal sshd-session[8921]: Connection closed by invalid user admin 103.175.73.3 port 38138 [preauth]
Nov 22 04:55:42 np0005531754.novalocal sshd-session[8923]: Invalid user admin from 103.175.73.3 port 38868
Nov 22 04:55:42 np0005531754.novalocal sshd-session[8923]: Connection closed by invalid user admin 103.175.73.3 port 38868 [preauth]
Nov 22 04:55:44 np0005531754.novalocal sshd-session[8925]: Invalid user admin from 103.175.73.3 port 39522
Nov 22 04:55:44 np0005531754.novalocal sshd-session[8925]: Connection closed by invalid user admin 103.175.73.3 port 39522 [preauth]
Nov 22 04:55:45 np0005531754.novalocal sshd-session[8927]: Invalid user admin from 103.175.73.3 port 40334
Nov 22 04:55:45 np0005531754.novalocal sshd-session[8927]: Connection closed by invalid user admin 103.175.73.3 port 40334 [preauth]
Nov 22 04:55:46 np0005531754.novalocal sshd-session[8929]: Invalid user admin from 103.175.73.3 port 40996
Nov 22 04:55:47 np0005531754.novalocal sshd-session[8929]: Connection closed by invalid user admin 103.175.73.3 port 40996 [preauth]
Nov 22 04:55:48 np0005531754.novalocal sshd-session[8931]: Invalid user admin from 103.175.73.3 port 41684
Nov 22 04:55:48 np0005531754.novalocal sshd-session[8931]: Connection closed by invalid user admin 103.175.73.3 port 41684 [preauth]
Nov 22 04:55:49 np0005531754.novalocal sshd-session[8933]: Invalid user admin from 103.175.73.3 port 42526
Nov 22 04:55:49 np0005531754.novalocal sshd-session[8933]: Connection closed by invalid user admin 103.175.73.3 port 42526 [preauth]
Nov 22 04:55:51 np0005531754.novalocal sshd-session[8935]: Invalid user admin from 103.175.73.3 port 43290
Nov 22 04:55:51 np0005531754.novalocal sshd-session[8935]: Connection closed by invalid user admin 103.175.73.3 port 43290 [preauth]
Nov 22 04:55:53 np0005531754.novalocal sshd-session[8938]: Invalid user admin from 103.175.73.3 port 44354
Nov 22 04:55:53 np0005531754.novalocal sshd-session[8938]: Connection closed by invalid user admin 103.175.73.3 port 44354 [preauth]
Nov 22 04:55:54 np0005531754.novalocal sshd-session[8940]: Invalid user admin from 103.175.73.3 port 44828
Nov 22 04:55:54 np0005531754.novalocal sshd-session[8940]: Connection closed by invalid user admin 103.175.73.3 port 44828 [preauth]
Nov 22 04:55:55 np0005531754.novalocal sshd-session[8942]: Invalid user admin from 103.175.73.3 port 45618
Nov 22 04:55:55 np0005531754.novalocal sshd-session[8942]: Connection closed by invalid user admin 103.175.73.3 port 45618 [preauth]
Nov 22 04:55:57 np0005531754.novalocal sshd-session[8944]: Invalid user admin from 103.175.73.3 port 46444
Nov 22 04:55:57 np0005531754.novalocal sshd-session[8944]: Connection closed by invalid user admin 103.175.73.3 port 46444 [preauth]
Nov 22 04:55:58 np0005531754.novalocal sshd-session[8949]: Invalid user admin from 103.175.73.3 port 46986
Nov 22 04:55:58 np0005531754.novalocal sshd-session[8949]: Connection closed by invalid user admin 103.175.73.3 port 46986 [preauth]
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 04:55:59 np0005531754.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 04:55:59 np0005531754.novalocal sshd-session[8951]: Invalid user admin from 103.175.73.3 port 47670
Nov 22 04:56:00 np0005531754.novalocal sshd-session[8951]: Connection closed by invalid user admin 103.175.73.3 port 47670 [preauth]
Nov 22 04:56:01 np0005531754.novalocal sshd-session[8956]: Invalid user admin from 103.175.73.3 port 48414
Nov 22 04:56:01 np0005531754.novalocal sshd-session[8956]: Connection closed by invalid user admin 103.175.73.3 port 48414 [preauth]
Nov 22 04:56:02 np0005531754.novalocal sshd-session[8959]: Invalid user admin from 103.175.73.3 port 49374
Nov 22 04:56:02 np0005531754.novalocal sshd-session[8959]: Connection closed by invalid user admin 103.175.73.3 port 49374 [preauth]
Nov 22 04:56:03 np0005531754.novalocal sshd-session[8961]: Invalid user admin from 103.175.73.3 port 50092
Nov 22 04:56:04 np0005531754.novalocal sshd-session[8961]: Connection closed by invalid user admin 103.175.73.3 port 50092 [preauth]
Nov 22 04:56:05 np0005531754.novalocal sshd-session[8963]: Invalid user admin from 103.175.73.3 port 50604
Nov 22 04:56:05 np0005531754.novalocal sshd-session[8963]: Connection closed by invalid user admin 103.175.73.3 port 50604 [preauth]
Nov 22 04:56:06 np0005531754.novalocal sshd-session[8965]: Invalid user admin from 103.175.73.3 port 51160
Nov 22 04:56:06 np0005531754.novalocal sshd-session[8965]: Connection closed by invalid user admin 103.175.73.3 port 51160 [preauth]
Nov 22 04:56:08 np0005531754.novalocal sshd-session[8967]: Invalid user admin from 103.175.73.3 port 52094
Nov 22 04:56:08 np0005531754.novalocal sshd-session[8967]: Connection closed by invalid user admin 103.175.73.3 port 52094 [preauth]
Nov 22 04:56:09 np0005531754.novalocal sshd-session[8972]: Invalid user orangepi from 123.253.22.30 port 36160
Nov 22 04:56:09 np0005531754.novalocal sshd-session[8974]: Invalid user admin from 103.175.73.3 port 52738
Nov 22 04:56:09 np0005531754.novalocal sshd-session[8972]: Connection closed by invalid user orangepi 123.253.22.30 port 36160 [preauth]
Nov 22 04:56:09 np0005531754.novalocal sshd-session[8974]: Connection closed by invalid user admin 103.175.73.3 port 52738 [preauth]
Nov 22 04:56:10 np0005531754.novalocal sshd-session[8976]: Invalid user admin from 103.175.73.3 port 53350
Nov 22 04:56:11 np0005531754.novalocal sshd-session[8976]: Connection closed by invalid user admin 103.175.73.3 port 53350 [preauth]
Nov 22 04:56:12 np0005531754.novalocal sshd-session[8978]: Invalid user pi from 103.175.73.3 port 54066
Nov 22 04:56:12 np0005531754.novalocal sshd-session[8978]: Connection closed by invalid user pi 103.175.73.3 port 54066 [preauth]
Nov 22 04:56:13 np0005531754.novalocal sshd-session[8980]: Connection closed by authenticating user ftp 103.175.73.3 port 54814 [preauth]
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 04:56:18 np0005531754.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 04:56:27 np0005531754.novalocal sshd-session[8988]: Invalid user sol from 80.94.92.166 port 37694
Nov 22 04:56:27 np0005531754.novalocal sshd-session[8988]: Connection closed by invalid user sol 80.94.92.166 port 37694 [preauth]
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 04:56:29 np0005531754.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 04:56:33 np0005531754.novalocal setsebool[8997]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 22 04:56:33 np0005531754.novalocal setsebool[8997]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 04:56:48 np0005531754.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 04:57:26 np0005531754.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 04:57:26 np0005531754.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 04:57:26 np0005531754.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 22 04:57:26 np0005531754.novalocal systemd[1]: Reloading.
Nov 22 04:57:26 np0005531754.novalocal systemd-rc-local-generator[9754]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 04:57:26 np0005531754.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 04:57:28 np0005531754.novalocal sudo[8834]: pam_unix(sudo:session): session closed for user root
Nov 22 04:57:29 np0005531754.novalocal python3[10721]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-6bc6-f53b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 04:57:30 np0005531754.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Nov 22 04:57:30 np0005531754.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 22 04:57:30 np0005531754.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Nov 22 04:57:30 np0005531754.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 22 04:57:30 np0005531754.novalocal kernel: evm: overlay not supported
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: Starting D-Bus User Message Bus...
Nov 22 04:57:30 np0005531754.novalocal dbus-broker-launch[11661]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 22 04:57:30 np0005531754.novalocal dbus-broker-launch[11661]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: Started D-Bus User Message Bus.
Nov 22 04:57:30 np0005531754.novalocal dbus-broker-lau[11661]: Ready
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: Created slice Slice /user.
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: podman-11503.scope: unit configures an IP firewall, but not running as root.
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: (This warning is only shown for the first unit using IP firewalling.)
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: Started podman-11503.scope.
Nov 22 04:57:30 np0005531754.novalocal systemd[4302]: Started podman-pause-a028a28b.scope.
Nov 22 04:57:31 np0005531754.novalocal sudo[12509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmujifttnwvbfldraytxjefudmsfcvf ; /usr/bin/python3'
Nov 22 04:57:31 np0005531754.novalocal sudo[12509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:57:31 np0005531754.novalocal python3[12538]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.5:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.5:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:57:31 np0005531754.novalocal python3[12538]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 22 04:57:31 np0005531754.novalocal sudo[12509]: pam_unix(sudo:session): session closed for user root
Nov 22 04:57:31 np0005531754.novalocal sshd-session[8810]: Connection closed by 38.102.83.114 port 56218
Nov 22 04:57:31 np0005531754.novalocal sshd-session[8807]: pam_unix(sshd:session): session closed for user zuul
Nov 22 04:57:31 np0005531754.novalocal systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Nov 22 04:57:31 np0005531754.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 22 04:57:31 np0005531754.novalocal systemd[1]: session-4.scope: Consumed 1min 2.227s CPU time.
Nov 22 04:57:31 np0005531754.novalocal systemd-logind[798]: Removed session 4.
Nov 22 04:57:51 np0005531754.novalocal sshd-session[18501]: Connection closed by 38.102.83.69 port 56440 [preauth]
Nov 22 04:57:51 np0005531754.novalocal sshd-session[18500]: Connection closed by 38.102.83.69 port 56424 [preauth]
Nov 22 04:57:51 np0005531754.novalocal sshd-session[18507]: Unable to negotiate with 38.102.83.69 port 56454: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 04:57:51 np0005531754.novalocal sshd-session[18504]: Unable to negotiate with 38.102.83.69 port 56472: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 22 04:57:51 np0005531754.novalocal sshd-session[18506]: Unable to negotiate with 38.102.83.69 port 56470: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 04:57:57 np0005531754.novalocal sshd-session[19724]: Accepted publickey for zuul from 38.102.83.114 port 57930 ssh2: RSA SHA256:723CLJgh9jzg+4Vfbb+tCqWKZy25P9e6Oul69vFbpik
Nov 22 04:57:57 np0005531754.novalocal systemd-logind[798]: New session 5 of user zuul.
Nov 22 04:57:57 np0005531754.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 22 04:57:57 np0005531754.novalocal sshd-session[19724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:57:57 np0005531754.novalocal python3[19751]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:57:58 np0005531754.novalocal sudo[20015]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hydxsgkbqcilpytngzcbrydicirhqysx ; /usr/bin/python3'
Nov 22 04:57:58 np0005531754.novalocal sudo[20015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:57:58 np0005531754.novalocal python3[20028]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:57:58 np0005531754.novalocal sudo[20015]: pam_unix(sudo:session): session closed for user root
Nov 22 04:57:58 np0005531754.novalocal sudo[20287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovzdgjnigdykrfqzklrupyziksrpouas ; /usr/bin/python3'
Nov 22 04:57:58 np0005531754.novalocal sudo[20287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:57:59 np0005531754.novalocal python3[20296]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005531754.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 22 04:57:59 np0005531754.novalocal useradd[20366]: new group: name=cloud-admin, GID=1002
Nov 22 04:57:59 np0005531754.novalocal useradd[20366]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 22 04:57:59 np0005531754.novalocal sudo[20287]: pam_unix(sudo:session): session closed for user root
Nov 22 04:57:59 np0005531754.novalocal sudo[20491]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akrvwqcysddqjtpwrsmxijvofbbhczyu ; /usr/bin/python3'
Nov 22 04:57:59 np0005531754.novalocal sudo[20491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:57:59 np0005531754.novalocal python3[20502]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2fCE4pM8UnUYabfT0MrGb94mFe26roqFQocYTDfQoYQ5AhY8f0UWitD/DgJ5xva2SW8YAkkt+bLqpMZbbNriA= zuul@np0005531753.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 04:57:59 np0005531754.novalocal sudo[20491]: pam_unix(sudo:session): session closed for user root
Nov 22 04:58:00 np0005531754.novalocal sudo[20733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwsbvgybleeqvvhwaeuhzpykuxzliomz ; /usr/bin/python3'
Nov 22 04:58:00 np0005531754.novalocal sudo[20733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:58:00 np0005531754.novalocal python3[20746]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 04:58:00 np0005531754.novalocal sudo[20733]: pam_unix(sudo:session): session closed for user root
Nov 22 04:58:00 np0005531754.novalocal sudo[20977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhuxbmgxvbcmqowlqvzcssedsrnnsyee ; /usr/bin/python3'
Nov 22 04:58:00 np0005531754.novalocal sudo[20977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:58:00 np0005531754.novalocal python3[20984]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763787479.8339-135-174447271139055/source _original_basename=tmpzd1wl7fv follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 04:58:00 np0005531754.novalocal sudo[20977]: pam_unix(sudo:session): session closed for user root
Nov 22 04:58:01 np0005531754.novalocal sudo[21285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbizgyjikcfkftcbcxsbxeyyhlahzdcn ; /usr/bin/python3'
Nov 22 04:58:01 np0005531754.novalocal sudo[21285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:58:01 np0005531754.novalocal python3[21296]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 22 04:58:01 np0005531754.novalocal systemd[1]: Starting Hostname Service...
Nov 22 04:58:01 np0005531754.novalocal systemd[1]: Started Hostname Service.
Nov 22 04:58:01 np0005531754.novalocal systemd-hostnamed[21396]: Changed pretty hostname to 'compute-0'
Nov 22 04:58:01 compute-0 systemd-hostnamed[21396]: Hostname set to <compute-0> (static)
Nov 22 04:58:01 compute-0 NetworkManager[7192]: <info>  [1763787481.7307] hostname: static hostname changed from "np0005531754.novalocal" to "compute-0"
Nov 22 04:58:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 04:58:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 04:58:01 compute-0 sudo[21285]: pam_unix(sudo:session): session closed for user root
Nov 22 04:58:02 compute-0 sshd-session[19727]: Connection closed by 38.102.83.114 port 57930
Nov 22 04:58:02 compute-0 sshd-session[19724]: pam_unix(sshd:session): session closed for user zuul
Nov 22 04:58:02 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 22 04:58:02 compute-0 systemd[1]: session-5.scope: Consumed 2.720s CPU time.
Nov 22 04:58:02 compute-0 systemd-logind[798]: Session 5 logged out. Waiting for processes to exit.
Nov 22 04:58:02 compute-0 systemd-logind[798]: Removed session 5.
Nov 22 04:58:11 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 04:58:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 04:58:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 04:58:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 3.329s CPU time.
Nov 22 04:58:30 compute-0 systemd[1]: run-r082c3a959f1c4c8180532759aca638a0.service: Deactivated successfully.
Nov 22 04:58:31 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 04:58:55 compute-0 sshd-session[30754]: Invalid user solana from 80.94.92.166 port 40292
Nov 22 04:58:55 compute-0 sshd-session[30754]: Connection closed by invalid user solana 80.94.92.166 port 40292 [preauth]
Nov 22 05:01:01 compute-0 CROND[30760]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 05:01:01 compute-0 run-parts[30763]: (/etc/cron.hourly) starting 0anacron
Nov 22 05:01:01 compute-0 anacron[30771]: Anacron started on 2025-11-22
Nov 22 05:01:01 compute-0 anacron[30771]: Will run job `cron.daily' in 6 min.
Nov 22 05:01:01 compute-0 anacron[30771]: Will run job `cron.weekly' in 26 min.
Nov 22 05:01:01 compute-0 anacron[30771]: Will run job `cron.monthly' in 46 min.
Nov 22 05:01:01 compute-0 anacron[30771]: Jobs will be executed sequentially
Nov 22 05:01:01 compute-0 run-parts[30773]: (/etc/cron.hourly) finished 0anacron
Nov 22 05:01:01 compute-0 CROND[30759]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 05:01:14 compute-0 sshd-session[30774]: Invalid user validator from 80.94.92.166 port 42920
Nov 22 05:01:14 compute-0 sshd-session[30774]: Connection closed by invalid user validator 80.94.92.166 port 42920 [preauth]
Nov 22 05:02:16 compute-0 sshd-session[30776]: Connection closed by authenticating user root 123.253.22.30 port 57082 [preauth]
Nov 22 05:02:46 compute-0 sshd-session[30779]: Accepted publickey for zuul from 38.102.83.69 port 56510 ssh2: RSA SHA256:723CLJgh9jzg+4Vfbb+tCqWKZy25P9e6Oul69vFbpik
Nov 22 05:02:46 compute-0 systemd-logind[798]: New session 6 of user zuul.
Nov 22 05:02:46 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 22 05:02:46 compute-0 sshd-session[30779]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:02:47 compute-0 python3[30855]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:02:48 compute-0 sudo[30969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psetplyyeulmwmdiljigjahocuzmhwbq ; /usr/bin/python3'
Nov 22 05:02:48 compute-0 sudo[30969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:48 compute-0 python3[30971]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:48 compute-0 sudo[30969]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:49 compute-0 sudo[31042]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikcmqawxplcefxdwruzmqtfgxawqslto ; /usr/bin/python3'
Nov 22 05:02:49 compute-0 sudo[31042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:49 compute-0 python3[31044]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:49 compute-0 sudo[31042]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:49 compute-0 sudo[31068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvxcjalvgmemqjmifvcmldndhecnyzfb ; /usr/bin/python3'
Nov 22 05:02:49 compute-0 sudo[31068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:49 compute-0 python3[31070]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:49 compute-0 sudo[31068]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:50 compute-0 sudo[31141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpgapmveltsfgvnoksgfanmtbpbkesje ; /usr/bin/python3'
Nov 22 05:02:50 compute-0 sudo[31141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:50 compute-0 python3[31143]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:50 compute-0 sudo[31141]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:50 compute-0 sudo[31167]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfsaznvqnejyaqbkygqfvommfqdpwuyr ; /usr/bin/python3'
Nov 22 05:02:50 compute-0 sudo[31167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:50 compute-0 python3[31169]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:50 compute-0 sudo[31167]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:51 compute-0 sudo[31240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehgvfecwxfwoxvejldgznxruyutvkpld ; /usr/bin/python3'
Nov 22 05:02:51 compute-0 sudo[31240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:51 compute-0 python3[31242]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:51 compute-0 sudo[31240]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:51 compute-0 sudo[31266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlobgbwhjhzqsttbsetsliwhxneobttv ; /usr/bin/python3'
Nov 22 05:02:51 compute-0 sudo[31266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:51 compute-0 python3[31268]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:51 compute-0 sudo[31266]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:52 compute-0 sudo[31339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzryzqxyngzclvhdzfiukeecwvcxebvy ; /usr/bin/python3'
Nov 22 05:02:52 compute-0 sudo[31339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:52 compute-0 python3[31341]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:52 compute-0 sudo[31339]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:52 compute-0 sudo[31365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-patiaawfqkbklnmzngecouyiraxgowlm ; /usr/bin/python3'
Nov 22 05:02:52 compute-0 sudo[31365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:52 compute-0 python3[31367]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:52 compute-0 sudo[31365]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:52 compute-0 sudo[31438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjmadxbaoayookytqtacwjpvzrpsagw ; /usr/bin/python3'
Nov 22 05:02:52 compute-0 sudo[31438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:52 compute-0 python3[31440]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:52 compute-0 sudo[31438]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:53 compute-0 sudo[31464]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzekcchwjbajemniaqwtxkixvhgxsopk ; /usr/bin/python3'
Nov 22 05:02:53 compute-0 sudo[31464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:53 compute-0 python3[31466]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:53 compute-0 sudo[31464]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:53 compute-0 sudo[31537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgajsrpewbkkpoqffbvswivtkyzedjbo ; /usr/bin/python3'
Nov 22 05:02:53 compute-0 sudo[31537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:53 compute-0 python3[31539]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:53 compute-0 sudo[31537]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:53 compute-0 sudo[31563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwjaksrtupbgsjshfykjkceivarkpilw ; /usr/bin/python3'
Nov 22 05:02:53 compute-0 sudo[31563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:53 compute-0 python3[31565]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:02:53 compute-0 sudo[31563]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:54 compute-0 sudo[31636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohqgyibquilupclnrnbpacrbahimlwng ; /usr/bin/python3'
Nov 22 05:02:54 compute-0 sudo[31636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:02:54 compute-0 python3[31638]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763787768.4770675-33558-11976610920953/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:02:54 compute-0 sudo[31636]: pam_unix(sudo:session): session closed for user root
Nov 22 05:02:57 compute-0 sshd-session[31663]: Connection closed by 192.168.122.11 port 55920 [preauth]
Nov 22 05:02:57 compute-0 sshd-session[31664]: Connection closed by 192.168.122.11 port 55926 [preauth]
Nov 22 05:02:57 compute-0 sshd-session[31665]: Unable to negotiate with 192.168.122.11 port 55936: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 05:02:57 compute-0 sshd-session[31666]: Unable to negotiate with 192.168.122.11 port 55944: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 05:02:57 compute-0 sshd-session[31667]: Unable to negotiate with 192.168.122.11 port 55960: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 22 05:03:06 compute-0 python3[31696]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:03:31 compute-0 sshd-session[31698]: Invalid user node from 80.94.92.166 port 45538
Nov 22 05:03:31 compute-0 sshd-session[31698]: Connection closed by invalid user node 80.94.92.166 port 45538 [preauth]
Nov 22 05:05:49 compute-0 sshd-session[31701]: Invalid user solv from 80.94.92.166 port 48146
Nov 22 05:05:49 compute-0 sshd-session[31701]: Connection closed by invalid user solv 80.94.92.166 port 48146 [preauth]
Nov 22 05:07:01 compute-0 anacron[30771]: Job `cron.daily' started
Nov 22 05:07:01 compute-0 anacron[30771]: Job `cron.daily' terminated
Nov 22 05:08:06 compute-0 sshd-session[30782]: Received disconnect from 38.102.83.69 port 56510:11: disconnected by user
Nov 22 05:08:06 compute-0 sshd-session[30782]: Disconnected from user zuul 38.102.83.69 port 56510
Nov 22 05:08:06 compute-0 sshd-session[30779]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:08:06 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 22 05:08:06 compute-0 systemd[1]: session-6.scope: Consumed 5.881s CPU time.
Nov 22 05:08:06 compute-0 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Nov 22 05:08:06 compute-0 systemd-logind[798]: Removed session 6.
Nov 22 05:08:09 compute-0 sshd-session[31707]: Invalid user solv from 80.94.92.166 port 50792
Nov 22 05:08:09 compute-0 sshd-session[31707]: Connection closed by invalid user solv 80.94.92.166 port 50792 [preauth]
Nov 22 05:08:17 compute-0 sshd-session[31709]: Connection closed by authenticating user root 123.253.22.30 port 49582 [preauth]
Nov 22 05:10:29 compute-0 sshd-session[31711]: Invalid user solv from 80.94.92.166 port 53428
Nov 22 05:10:30 compute-0 sshd-session[31711]: Connection closed by invalid user solv 80.94.92.166 port 53428 [preauth]
Nov 22 05:12:54 compute-0 sshd-session[31714]: Invalid user solana from 80.94.92.166 port 56022
Nov 22 05:12:55 compute-0 sshd-session[31714]: Connection closed by invalid user solana 80.94.92.166 port 56022 [preauth]
Nov 22 05:14:12 compute-0 sshd-session[31718]: Connection closed by authenticating user root 123.253.22.30 port 33364 [preauth]
Nov 22 05:15:20 compute-0 sshd-session[31720]: Invalid user solana from 80.94.92.166 port 58604
Nov 22 05:15:20 compute-0 sshd-session[31720]: Connection closed by invalid user solana 80.94.92.166 port 58604 [preauth]
Nov 22 05:15:27 compute-0 sshd-session[31722]: Accepted publickey for zuul from 192.168.122.30 port 50978 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:15:27 compute-0 systemd-logind[798]: New session 7 of user zuul.
Nov 22 05:15:27 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 22 05:15:27 compute-0 sshd-session[31722]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:15:28 compute-0 python3.9[31875]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:15:30 compute-0 sudo[32054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkgevvmwoimkxdokozdgkcuzyqqtnmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788529.5574048-32-268959162895745/AnsiballZ_command.py'
Nov 22 05:15:30 compute-0 sudo[32054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:15:30 compute-0 python3.9[32056]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:15:38 compute-0 sudo[32054]: pam_unix(sudo:session): session closed for user root
Nov 22 05:15:38 compute-0 sshd-session[31725]: Connection closed by 192.168.122.30 port 50978
Nov 22 05:15:38 compute-0 sshd-session[31722]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:15:38 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 22 05:15:38 compute-0 systemd[1]: session-7.scope: Consumed 8.245s CPU time.
Nov 22 05:15:38 compute-0 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Nov 22 05:15:38 compute-0 systemd-logind[798]: Removed session 7.
Nov 22 05:15:55 compute-0 sshd-session[32114]: Accepted publickey for zuul from 192.168.122.30 port 45796 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:15:55 compute-0 systemd-logind[798]: New session 8 of user zuul.
Nov 22 05:15:55 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 22 05:15:55 compute-0 sshd-session[32114]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:15:55 compute-0 python3.9[32267]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 05:15:57 compute-0 python3.9[32441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:15:57 compute-0 sudo[32591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazcnjialrmhgpocbhnrfrpbvidqnnse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788557.2998128-45-60610141983343/AnsiballZ_command.py'
Nov 22 05:15:57 compute-0 sudo[32591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:15:58 compute-0 python3.9[32593]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:15:58 compute-0 sudo[32591]: pam_unix(sudo:session): session closed for user root
Nov 22 05:15:58 compute-0 sudo[32744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhiwmoxpuwsbctuxwinbdlwbepjjxazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788558.4681022-57-232895570271498/AnsiballZ_stat.py'
Nov 22 05:15:58 compute-0 sudo[32744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:15:59 compute-0 python3.9[32746]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:15:59 compute-0 sudo[32744]: pam_unix(sudo:session): session closed for user root
Nov 22 05:15:59 compute-0 sudo[32896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipgctbsaxhkdubrtfqakmmfdvpjkyqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788559.3966055-65-84594290172349/AnsiballZ_file.py'
Nov 22 05:15:59 compute-0 sudo[32896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:00 compute-0 python3.9[32898]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:16:00 compute-0 sudo[32896]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:00 compute-0 sudo[33048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kinidecvccrvvdkhtncvkibkddlwsigo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788560.361447-73-225012149061364/AnsiballZ_stat.py'
Nov 22 05:16:00 compute-0 sudo[33048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:00 compute-0 python3.9[33050]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:16:00 compute-0 sudo[33048]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:01 compute-0 sudo[33171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wriypppyfnxapxjmvfstomuqphyutith ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788560.361447-73-225012149061364/AnsiballZ_copy.py'
Nov 22 05:16:01 compute-0 sudo[33171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:01 compute-0 python3.9[33173]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788560.361447-73-225012149061364/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:16:01 compute-0 sudo[33171]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:02 compute-0 sudo[33323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwsaszmojiouyoajpktjolzqlitaktn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788561.737607-88-58935450204108/AnsiballZ_setup.py'
Nov 22 05:16:02 compute-0 sudo[33323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:02 compute-0 python3.9[33325]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:16:02 compute-0 sudo[33323]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:03 compute-0 sudo[33479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiklddrgcpdzprfuyzwdgvcmirepmbyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788562.7399952-96-174326463074006/AnsiballZ_file.py'
Nov 22 05:16:03 compute-0 sudo[33479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:03 compute-0 python3.9[33481]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:16:03 compute-0 sudo[33479]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:03 compute-0 sudo[33631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okkdfvszkejzctvtzmubfwsmjqnqbeod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788563.4637942-105-197653358472645/AnsiballZ_file.py'
Nov 22 05:16:03 compute-0 sudo[33631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:03 compute-0 python3.9[33633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:16:03 compute-0 sudo[33631]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:04 compute-0 python3.9[33783]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:16:08 compute-0 python3.9[34036]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:16:09 compute-0 python3.9[34186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:16:10 compute-0 python3.9[34340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:16:11 compute-0 sudo[34496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evficonalnilemmmngjemztdcmwjzilp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788570.7855988-153-226923155905335/AnsiballZ_setup.py'
Nov 22 05:16:11 compute-0 sudo[34496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:11 compute-0 python3.9[34498]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:16:11 compute-0 sudo[34496]: pam_unix(sudo:session): session closed for user root
Nov 22 05:16:12 compute-0 sudo[34580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvwtcqklannkmmmixmxizkwxigrmzxys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788570.7855988-153-226923155905335/AnsiballZ_dnf.py'
Nov 22 05:16:12 compute-0 sudo[34580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:16:12 compute-0 python3.9[34582]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:16:55 compute-0 systemd[1]: Reloading.
Nov 22 05:16:55 compute-0 systemd-rc-local-generator[34782]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:16:55 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 22 05:16:55 compute-0 systemd[1]: Reloading.
Nov 22 05:16:55 compute-0 systemd-rc-local-generator[34819]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:16:55 compute-0 systemd[1]: Starting dnf makecache...
Nov 22 05:16:55 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 22 05:16:55 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 22 05:16:55 compute-0 systemd[1]: Reloading.
Nov 22 05:16:55 compute-0 systemd-rc-local-generator[34857]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:16:56 compute-0 dnf[34831]: Failed determining last makecache time.
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-barbican-42b4c41831408a8e323 115 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 145 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-cinder-1c00d6490d88e436f26ef  14 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-stevedore-c4acc5639fd2329372142 141 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-observabilityclient-2f31846d73c 129 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:16:56 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-os-net-config-bbae2ed8a159b0435a473f38 134 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 137 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-designate-tests-tempest-347fdbc 139 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-glance-1fd12c29b339f30fe823e 137 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 107 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-manila-3c01b7181572c95dac462 153 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-whitebox-neutron-tests-tempest- 157 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-octavia-ba397f07a7331190208c 152 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-watcher-c014f81a8647287f6dcc 152 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-tcib-1124124ec06aadbac34f0d340b 164 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 154 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-swift-dc98a8463506ac520c469a 141 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-python-tempestconf-8515371b7cceebd4282 143 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: delorean-openstack-heat-ui-013accbfd179753bc3f0 142 kB/s | 3.0 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: CentOS Stream 9 - BaseOS                         77 kB/s | 7.3 kB     00:00
Nov 22 05:16:56 compute-0 dnf[34831]: CentOS Stream 9 - AppStream                      76 kB/s | 7.4 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: CentOS Stream 9 - CRB                            45 kB/s | 7.2 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: CentOS Stream 9 - Extras packages                67 kB/s | 8.3 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: dlrn-antelope-testing                           110 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: dlrn-antelope-build-deps                        125 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: centos9-rabbitmq                                110 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: centos9-storage                                 115 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: centos9-opstools                                115 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: NFV SIG OpenvSwitch                             109 kB/s | 3.0 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: repo-setup-centos-appstream                     162 kB/s | 4.4 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: repo-setup-centos-baseos                        191 kB/s | 3.9 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: repo-setup-centos-highavailability              176 kB/s | 3.9 kB     00:00
Nov 22 05:16:57 compute-0 dnf[34831]: repo-setup-centos-powertools                    190 kB/s | 4.3 kB     00:00
Nov 22 05:16:58 compute-0 dnf[34831]: Extra Packages for Enterprise Linux 9 - x86_64  288 kB/s |  33 kB     00:00
Nov 22 05:17:00 compute-0 dnf[34831]: Extra Packages for Enterprise Linux 9 - x86_64  9.3 MB/s |  20 MB     00:02
Nov 22 05:17:08 compute-0 dnf[34831]: Metadata cache created.
Nov 22 05:17:09 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 05:17:09 compute-0 systemd[1]: Finished dnf makecache.
Nov 22 05:17:09 compute-0 systemd[1]: dnf-makecache.service: Consumed 10.231s CPU time.
Nov 22 05:17:39 compute-0 sshd-session[35079]: Invalid user firedancer from 80.94.92.166 port 32998
Nov 22 05:17:39 compute-0 sshd-session[35079]: Connection closed by invalid user firedancer 80.94.92.166 port 32998 [preauth]
Nov 22 05:18:01 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:18:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:18:01 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 22 05:18:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:18:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:18:02 compute-0 systemd[1]: Reloading.
Nov 22 05:18:02 compute-0 systemd-rc-local-generator[35229]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:18:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:18:03 compute-0 sudo[34580]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:03 compute-0 sudo[36141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trpjithkjtlskrvbkqljrqdjpkxhqbsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788683.4794226-165-212150788623196/AnsiballZ_command.py'
Nov 22 05:18:03 compute-0 sudo[36141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:18:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:18:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.349s CPU time.
Nov 22 05:18:03 compute-0 systemd[1]: run-r222ca1f2ca3048cda84e1c39139a18fc.service: Deactivated successfully.
Nov 22 05:18:03 compute-0 python3.9[36143]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:05 compute-0 sudo[36141]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:05 compute-0 sudo[36423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdmdkrupkjgftxrfeqbaptifmdsxrxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788685.2509377-173-123207822881527/AnsiballZ_selinux.py'
Nov 22 05:18:05 compute-0 sudo[36423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:06 compute-0 python3.9[36425]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 05:18:06 compute-0 sudo[36423]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:06 compute-0 sudo[36575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibnsqceuyphmxosyxlqjaxjljhzswqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788686.531347-184-148827942422476/AnsiballZ_command.py'
Nov 22 05:18:06 compute-0 sudo[36575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:07 compute-0 python3.9[36577]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 05:18:07 compute-0 sudo[36575]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:08 compute-0 sudo[36729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozelmxqmaipltsnvacdjtjvoabexwhyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788688.073799-192-126908140641622/AnsiballZ_file.py'
Nov 22 05:18:08 compute-0 sudo[36729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:09 compute-0 python3.9[36731]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:18:09 compute-0 sudo[36729]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:10 compute-0 sudo[36881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrvtdacqpbxgjdgbtewwoygoeeoxnwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788690.0564837-200-132743872002762/AnsiballZ_mount.py'
Nov 22 05:18:10 compute-0 sudo[36881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:10 compute-0 python3.9[36883]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 05:18:10 compute-0 sudo[36881]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:11 compute-0 sudo[37033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhwtzxriejagychkgnrbgqmigivkyat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788691.6166143-228-84133465102807/AnsiballZ_file.py'
Nov 22 05:18:11 compute-0 sudo[37033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:12 compute-0 python3.9[37035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:18:12 compute-0 sudo[37033]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:12 compute-0 sudo[37185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmdfomegfayybnvdzzjobxvqwoonyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788692.2808309-236-78822947103085/AnsiballZ_stat.py'
Nov 22 05:18:12 compute-0 sudo[37185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:12 compute-0 python3.9[37187]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:18:12 compute-0 sudo[37185]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:13 compute-0 sudo[37308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtuaiwdrttujjcbhbefswqtkbmaqwzdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788692.2808309-236-78822947103085/AnsiballZ_copy.py'
Nov 22 05:18:13 compute-0 sudo[37308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:16 compute-0 python3.9[37310]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788692.2808309-236-78822947103085/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:18:16 compute-0 sudo[37308]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:17 compute-0 sudo[37460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebgtcnggwbbjonqqumsbjeskxqftciqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788697.3773894-260-72454476832234/AnsiballZ_stat.py'
Nov 22 05:18:17 compute-0 sudo[37460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:18 compute-0 python3.9[37462]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:18:18 compute-0 sudo[37460]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:18 compute-0 sudo[37612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxetojaepiqlaubekvvpanjahlmcgwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788698.198123-268-104949493916264/AnsiballZ_command.py'
Nov 22 05:18:18 compute-0 sudo[37612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:18 compute-0 python3.9[37614]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:18 compute-0 sudo[37612]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:19 compute-0 sudo[37765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdqrpyrwplvvygtpvgjeeowgqdbvjxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788698.9949994-276-276555634938310/AnsiballZ_file.py'
Nov 22 05:18:19 compute-0 sudo[37765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:19 compute-0 python3.9[37767]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:18:19 compute-0 sudo[37765]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:20 compute-0 sudo[37917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjambzximelvwwixqeevaxozduyeypl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788699.9718285-287-178699719404223/AnsiballZ_getent.py'
Nov 22 05:18:20 compute-0 sudo[37917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:20 compute-0 python3.9[37919]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 05:18:20 compute-0 sudo[37917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:18:21 compute-0 sudo[38071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgmmstmbtoetlyuidhlkpogmtoxwvvaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788700.7581193-295-265143651579344/AnsiballZ_group.py'
Nov 22 05:18:21 compute-0 sudo[38071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:21 compute-0 python3.9[38073]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:18:21 compute-0 groupadd[38074]: group added to /etc/group: name=qemu, GID=107
Nov 22 05:18:21 compute-0 groupadd[38074]: group added to /etc/gshadow: name=qemu
Nov 22 05:18:21 compute-0 groupadd[38074]: new group: name=qemu, GID=107
Nov 22 05:18:21 compute-0 sudo[38071]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:22 compute-0 sudo[38229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfnigdslqlnxyhhacbgribrmocyzwlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788701.7485702-303-281099355157023/AnsiballZ_user.py'
Nov 22 05:18:22 compute-0 sudo[38229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:22 compute-0 python3.9[38231]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 05:18:22 compute-0 useradd[38233]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 05:18:22 compute-0 sudo[38229]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:23 compute-0 sudo[38389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwlokczmtswslzvossxahgwfkwicivg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788702.866352-311-96119346200014/AnsiballZ_getent.py'
Nov 22 05:18:23 compute-0 sudo[38389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:23 compute-0 python3.9[38391]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 05:18:23 compute-0 sudo[38389]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:24 compute-0 sudo[38542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmvmfngmcgyzkupstuklpaaftgleyeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788703.7107928-319-137649173407391/AnsiballZ_group.py'
Nov 22 05:18:24 compute-0 sudo[38542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:24 compute-0 python3.9[38544]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:18:24 compute-0 groupadd[38545]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 22 05:18:24 compute-0 groupadd[38545]: group added to /etc/gshadow: name=hugetlbfs
Nov 22 05:18:24 compute-0 groupadd[38545]: new group: name=hugetlbfs, GID=42477
Nov 22 05:18:24 compute-0 sudo[38542]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:24 compute-0 sudo[38700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snbgazlijenpvsqthdhddzrlojadvkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788704.5416126-328-233130610786091/AnsiballZ_file.py'
Nov 22 05:18:24 compute-0 sudo[38700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:25 compute-0 python3.9[38702]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 05:18:25 compute-0 sudo[38700]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:25 compute-0 sudo[38852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldftltdcmqczovukfswbvqmmavkxapqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788705.527108-339-63311425358434/AnsiballZ_dnf.py'
Nov 22 05:18:25 compute-0 sudo[38852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:26 compute-0 python3.9[38854]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:18:27 compute-0 sudo[38852]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:28 compute-0 sudo[39005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwawlqcduttqsrqswnoghabjnmbttagg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788707.9516022-347-253631820837424/AnsiballZ_file.py'
Nov 22 05:18:28 compute-0 sudo[39005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:28 compute-0 python3.9[39007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:18:28 compute-0 sudo[39005]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:28 compute-0 sudo[39157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtomjbvuhlmrcgruqofzbifzjrsgibzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788708.616697-355-27907899729112/AnsiballZ_stat.py'
Nov 22 05:18:28 compute-0 sudo[39157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:29 compute-0 python3.9[39159]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:18:29 compute-0 sudo[39157]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:29 compute-0 sudo[39280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiooupeqwxtsjhuasarixygmekwgmhye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788708.616697-355-27907899729112/AnsiballZ_copy.py'
Nov 22 05:18:29 compute-0 sudo[39280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:29 compute-0 python3.9[39282]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788708.616697-355-27907899729112/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:18:29 compute-0 sudo[39280]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:30 compute-0 sudo[39432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciyycamqtgadccrplxglcdivcowrneun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788710.0300398-370-30753069142229/AnsiballZ_systemd.py'
Nov 22 05:18:30 compute-0 sudo[39432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:31 compute-0 python3.9[39434]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:18:31 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 05:18:31 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 22 05:18:31 compute-0 kernel: Bridge firewalling registered
Nov 22 05:18:31 compute-0 systemd-modules-load[39438]: Inserted module 'br_netfilter'
Nov 22 05:18:31 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 05:18:31 compute-0 sudo[39432]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:31 compute-0 sudo[39591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumusotjfymntwzhvbvtlaeisriuffrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788711.4162822-378-217784410404142/AnsiballZ_stat.py'
Nov 22 05:18:31 compute-0 sudo[39591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:31 compute-0 python3.9[39593]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:18:31 compute-0 sudo[39591]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:32 compute-0 sudo[39714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilpmxfkfptkdcnjnqppoeanetyxgswuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788711.4162822-378-217784410404142/AnsiballZ_copy.py'
Nov 22 05:18:32 compute-0 sudo[39714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:32 compute-0 python3.9[39716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788711.4162822-378-217784410404142/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:18:32 compute-0 sudo[39714]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:33 compute-0 sudo[39866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fstomoajnpzrhadmwaddhavebskjkgkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788713.0422199-396-50345631549849/AnsiballZ_dnf.py'
Nov 22 05:18:33 compute-0 sudo[39866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:33 compute-0 python3.9[39868]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:18:36 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:18:36 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:18:37 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:18:37 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:18:37 compute-0 systemd[1]: Reloading.
Nov 22 05:18:37 compute-0 systemd-rc-local-generator[39934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:18:37 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:18:37 compute-0 sudo[39866]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:38 compute-0 python3.9[41237]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:18:39 compute-0 python3.9[42228]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 05:18:40 compute-0 python3.9[42989]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:18:40 compute-0 sudo[43826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxfqknjowvmkuezinmgxetpwvgenfrvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788720.4008367-435-163238648099368/AnsiballZ_command.py'
Nov 22 05:18:40 compute-0 sudo[43826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:40 compute-0 python3.9[43843]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 05:18:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:18:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:18:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.836s CPU time.
Nov 22 05:18:41 compute-0 systemd[1]: run-r41a3292441ee4e3d89ab19df50219633.service: Deactivated successfully.
Nov 22 05:18:41 compute-0 systemd[1]: Starting Authorization Manager...
Nov 22 05:18:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 05:18:41 compute-0 polkitd[44246]: Started polkitd version 0.117
Nov 22 05:18:41 compute-0 polkitd[44246]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 05:18:41 compute-0 polkitd[44246]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 05:18:41 compute-0 polkitd[44246]: Finished loading, compiling and executing 2 rules
Nov 22 05:18:41 compute-0 systemd[1]: Started Authorization Manager.
Nov 22 05:18:41 compute-0 polkitd[44246]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 22 05:18:41 compute-0 sudo[43826]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:42 compute-0 sudo[44414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnpsowbybhaissamurzgkftytmnbcvtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788721.849363-444-187605994201189/AnsiballZ_systemd.py'
Nov 22 05:18:42 compute-0 sudo[44414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:42 compute-0 python3.9[44416]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:18:42 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 05:18:42 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 05:18:42 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 05:18:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 05:18:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 05:18:42 compute-0 sudo[44414]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:43 compute-0 python3.9[44577]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 05:18:45 compute-0 sudo[44727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xguamrhcbwayebrkuowlfujnxfcehaol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788725.5989895-501-183574454402592/AnsiballZ_systemd.py'
Nov 22 05:18:45 compute-0 sudo[44727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:46 compute-0 python3.9[44729]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:18:46 compute-0 systemd[1]: Reloading.
Nov 22 05:18:46 compute-0 systemd-rc-local-generator[44757]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:18:46 compute-0 sudo[44727]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:46 compute-0 sudo[44915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swmrdfieubzmihxqrqazhvpcxvubhphy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788726.6386425-501-85176396834271/AnsiballZ_systemd.py'
Nov 22 05:18:46 compute-0 sudo[44915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:47 compute-0 python3.9[44917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:18:47 compute-0 systemd[1]: Reloading.
Nov 22 05:18:47 compute-0 systemd-rc-local-generator[44945]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:18:47 compute-0 sudo[44915]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:48 compute-0 sudo[45104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivdgtccajfshzxpqwomtrtdkontmtfgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788727.9213798-517-229089736586539/AnsiballZ_command.py'
Nov 22 05:18:48 compute-0 sudo[45104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:48 compute-0 python3.9[45106]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:48 compute-0 sudo[45104]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:49 compute-0 sudo[45257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klavdyttzmjbeawntnmqlditoqkxyauu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788728.5538776-525-200045906417395/AnsiballZ_command.py'
Nov 22 05:18:49 compute-0 sudo[45257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:49 compute-0 python3.9[45259]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:49 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 22 05:18:49 compute-0 sudo[45257]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:49 compute-0 sudo[45410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyabhkfmedtzpcrtqxorylvtmdtwvpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788729.5374672-533-170167812664243/AnsiballZ_command.py'
Nov 22 05:18:49 compute-0 sudo[45410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:50 compute-0 python3.9[45412]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:51 compute-0 sudo[45410]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:51 compute-0 sudo[45572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jildssxnszwnjeiiwoztguvoyhyxqoel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788731.7500875-541-191629299359694/AnsiballZ_command.py'
Nov 22 05:18:51 compute-0 sudo[45572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:52 compute-0 python3.9[45574]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:18:52 compute-0 sudo[45572]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:52 compute-0 sudo[45725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirvvykuvcdascoagjgdcfemjwihixup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788732.3784852-549-142296227142773/AnsiballZ_systemd.py'
Nov 22 05:18:52 compute-0 sudo[45725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:18:53 compute-0 python3.9[45727]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:18:53 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 05:18:53 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 05:18:53 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 22 05:18:53 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 22 05:18:53 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 05:18:53 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 22 05:18:53 compute-0 sudo[45725]: pam_unix(sudo:session): session closed for user root
Nov 22 05:18:53 compute-0 sshd-session[32117]: Connection closed by 192.168.122.30 port 45796
Nov 22 05:18:53 compute-0 sshd-session[32114]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:18:53 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 22 05:18:53 compute-0 systemd[1]: session-8.scope: Consumed 2min 16.497s CPU time.
Nov 22 05:18:53 compute-0 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Nov 22 05:18:53 compute-0 systemd-logind[798]: Removed session 8.
Nov 22 05:18:58 compute-0 sshd-session[45757]: Accepted publickey for zuul from 192.168.122.30 port 39580 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:18:58 compute-0 systemd-logind[798]: New session 9 of user zuul.
Nov 22 05:18:58 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 22 05:18:58 compute-0 sshd-session[45757]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:18:59 compute-0 python3.9[45910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:19:00 compute-0 sudo[46064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jugcdgmosstzkfagcyssyqimuswyvrok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788740.3278594-36-162389926017177/AnsiballZ_getent.py'
Nov 22 05:19:00 compute-0 sudo[46064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:00 compute-0 python3.9[46066]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 05:19:00 compute-0 sudo[46064]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:01 compute-0 sudo[46217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqruxztqaicszbwhpprtbafunltqhcak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788741.1619077-44-168512303239601/AnsiballZ_group.py'
Nov 22 05:19:01 compute-0 sudo[46217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:01 compute-0 python3.9[46219]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:19:01 compute-0 groupadd[46220]: group added to /etc/group: name=openvswitch, GID=42476
Nov 22 05:19:01 compute-0 groupadd[46220]: group added to /etc/gshadow: name=openvswitch
Nov 22 05:19:01 compute-0 groupadd[46220]: new group: name=openvswitch, GID=42476
Nov 22 05:19:01 compute-0 sudo[46217]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:02 compute-0 sudo[46375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzplrsuwsoljwqirsnjmnuhvkoioqmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788741.9997256-52-41590659626709/AnsiballZ_user.py'
Nov 22 05:19:02 compute-0 sudo[46375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:02 compute-0 python3.9[46377]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 05:19:02 compute-0 useradd[46379]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 05:19:02 compute-0 useradd[46379]: add 'openvswitch' to group 'hugetlbfs'
Nov 22 05:19:02 compute-0 useradd[46379]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 22 05:19:02 compute-0 sudo[46375]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:03 compute-0 sudo[46535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztprqcypisibmifjgmepczyyelgvvfbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788743.088876-62-261145357786907/AnsiballZ_setup.py'
Nov 22 05:19:03 compute-0 sudo[46535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:03 compute-0 python3.9[46537]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:19:04 compute-0 sudo[46535]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:04 compute-0 sudo[46619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwzsjyotvcfhspxzvaxayuykyfiunymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788743.088876-62-261145357786907/AnsiballZ_dnf.py'
Nov 22 05:19:04 compute-0 sudo[46619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:04 compute-0 python3.9[46621]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 05:19:06 compute-0 sudo[46619]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:07 compute-0 sudo[46782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pftvggydkuiudzcjlhvjpfkbibhfmoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788746.959144-76-168046377382363/AnsiballZ_dnf.py'
Nov 22 05:19:07 compute-0 sudo[46782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:07 compute-0 python3.9[46784]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:19:20 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:19:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:19:20 compute-0 groupadd[46807]: group added to /etc/group: name=unbound, GID=993
Nov 22 05:19:20 compute-0 groupadd[46807]: group added to /etc/gshadow: name=unbound
Nov 22 05:19:20 compute-0 groupadd[46807]: new group: name=unbound, GID=993
Nov 22 05:19:21 compute-0 useradd[46814]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 22 05:19:21 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 22 05:19:21 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 22 05:19:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:19:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:19:24 compute-0 systemd[1]: Reloading.
Nov 22 05:19:24 compute-0 systemd-rc-local-generator[47308]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:19:24 compute-0 systemd-sysv-generator[47313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:19:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:19:26 compute-0 sudo[46782]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:27 compute-0 sudo[47879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beuinisgtsjqxcbvjfxefmrosyrvpuqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788766.5763025-84-143904762382221/AnsiballZ_systemd.py'
Nov 22 05:19:27 compute-0 sudo[47879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:27 compute-0 python3.9[47881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:19:27 compute-0 systemd[1]: Reloading.
Nov 22 05:19:27 compute-0 systemd-sysv-generator[47916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:19:27 compute-0 systemd-rc-local-generator[47913]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:19:27 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 22 05:19:27 compute-0 chown[47924]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 22 05:19:27 compute-0 ovs-ctl[47929]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 22 05:19:28 compute-0 ovs-ctl[47929]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 22 05:19:28 compute-0 ovs-ctl[47929]: Starting ovsdb-server [  OK  ]
Nov 22 05:19:28 compute-0 ovs-vsctl[47978]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 22 05:19:28 compute-0 ovs-vsctl[47994]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"772af8e6-0f26-443e-a044-9109439e729d\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 22 05:19:28 compute-0 ovs-ctl[47929]: Configuring Open vSwitch system IDs [  OK  ]
Nov 22 05:19:28 compute-0 ovs-vsctl[48001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 05:19:28 compute-0 ovs-ctl[47929]: Enabling remote OVSDB managers [  OK  ]
Nov 22 05:19:28 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 22 05:19:28 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 22 05:19:28 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 22 05:19:28 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 22 05:19:28 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 22 05:19:28 compute-0 ovs-ctl[48048]: Inserting openvswitch module [  OK  ]
Nov 22 05:19:28 compute-0 ovs-ctl[48017]: Starting ovs-vswitchd [  OK  ]
Nov 22 05:19:28 compute-0 ovs-ctl[48017]: Enabling remote OVSDB managers [  OK  ]
Nov 22 05:19:28 compute-0 ovs-vsctl[48066]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 05:19:28 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 22 05:19:28 compute-0 systemd[1]: Starting Open vSwitch...
Nov 22 05:19:28 compute-0 systemd[1]: Finished Open vSwitch.
Nov 22 05:19:28 compute-0 sudo[47879]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:29 compute-0 python3.9[48217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:19:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:19:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:19:29 compute-0 systemd[1]: run-r2ded0afb5af74997a975c6ba23172d28.service: Deactivated successfully.
Nov 22 05:19:30 compute-0 sudo[48368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lysvvngarzjhikemclpzelbboppzynpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788770.0010161-102-95093397107339/AnsiballZ_sefcontext.py'
Nov 22 05:19:30 compute-0 sudo[48368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:30 compute-0 python3.9[48370]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 05:19:32 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:19:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:19:33 compute-0 sudo[48368]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:34 compute-0 python3.9[48525]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:19:34 compute-0 sudo[48681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtjjxcgcdaueuxijbmrbhbfuynxxcujd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788774.5499485-120-217145657491785/AnsiballZ_dnf.py'
Nov 22 05:19:34 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 22 05:19:34 compute-0 sudo[48681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:35 compute-0 python3.9[48683]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:19:36 compute-0 sudo[48681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:37 compute-0 sudo[48834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkcoudqfsnsjfakzmrtsmtkxlpeivgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788776.9782786-128-79570255400497/AnsiballZ_command.py'
Nov 22 05:19:37 compute-0 sudo[48834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:37 compute-0 python3.9[48836]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:19:38 compute-0 sudo[48834]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:38 compute-0 sudo[49121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgehngpvcphgvxhgepgfuqpgrqqhvze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788778.520288-136-204715358110725/AnsiballZ_file.py'
Nov 22 05:19:38 compute-0 sudo[49121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:39 compute-0 python3.9[49123]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 05:19:39 compute-0 sudo[49121]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:40 compute-0 python3.9[49273]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:19:40 compute-0 sudo[49425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjrmwxxbxmrsvnlfqoxcdchszfbnramz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788780.2661898-152-129751241531366/AnsiballZ_dnf.py'
Nov 22 05:19:40 compute-0 sudo[49425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:40 compute-0 python3.9[49427]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:19:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:19:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:19:43 compute-0 systemd[1]: Reloading.
Nov 22 05:19:43 compute-0 systemd-rc-local-generator[49466]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:19:43 compute-0 systemd-sysv-generator[49470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:19:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:19:44 compute-0 sudo[49425]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:45 compute-0 sudo[49740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eislrugaocytyuwrgsovlpuybkuxajcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788784.948916-160-168177375234008/AnsiballZ_systemd.py'
Nov 22 05:19:45 compute-0 sudo[49740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:45 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:19:45 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:19:45 compute-0 systemd[1]: run-re73195c3ad3849ffa81f41914916eecd.service: Deactivated successfully.
Nov 22 05:19:45 compute-0 python3.9[49742]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:19:45 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 05:19:45 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 05:19:45 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 05:19:45 compute-0 systemd[1]: Stopping Network Manager...
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.6948] caught SIGTERM, shutting down normally.
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): canceled DHCP transaction
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.6961] dhcp4 (eth0): state changed no lease
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.6963] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 05:19:45 compute-0 NetworkManager[7192]: <info>  [1763788785.7035] exiting (success)
Nov 22 05:19:45 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 05:19:45 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 05:19:45 compute-0 systemd[1]: Stopped Network Manager.
Nov 22 05:19:45 compute-0 systemd[1]: NetworkManager.service: Consumed 19.078s CPU time, 4.1M memory peak, read 0B from disk, written 45.5K to disk.
Nov 22 05:19:45 compute-0 systemd[1]: Starting Network Manager...
Nov 22 05:19:45 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.7561] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:0ad7a365-484a-42b3-93c5-a59cf6bc29d9)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.7565] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.7622] manager[0x55a58f847090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 05:19:45 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 05:19:45 compute-0 systemd[1]: Started Hostname Service.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8529] hostname: hostname: using hostnamed
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8532] hostname: static hostname changed from (none) to "compute-0"
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8539] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8544] manager[0x55a58f847090]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8545] manager[0x55a58f847090]: rfkill: WWAN hardware radio set enabled
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8566] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8574] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8575] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8576] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8577] manager: Networking is enabled by state file
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8580] settings: Loaded settings plugin: keyfile (internal)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8584] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8613] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8626] dhcp: init: Using DHCP client 'internal'
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8629] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8634] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8641] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8648] device (lo): Activation: starting connection 'lo' (29f19999-cee5-4ca2-a804-2bcb67c28530)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8654] device (eth0): carrier: link connected
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8658] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8663] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8664] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8671] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8678] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8684] device (eth1): carrier: link connected
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8688] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8694] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b) (indicated)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8695] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8701] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8707] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 05:19:45 compute-0 systemd[1]: Started Network Manager.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8713] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8722] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8725] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8727] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8730] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8732] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8735] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8738] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8741] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8756] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8759] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8766] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8777] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8785] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8787] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8794] device (lo): Activation: successful, device activated.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8800] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8806] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 05:19:45 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8869] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8874] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8886] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8890] device (eth1): Activation: successful, device activated.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8913] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8916] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8921] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8924] device (eth0): Activation: successful, device activated.
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8928] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 05:19:45 compute-0 NetworkManager[49751]: <info>  [1763788785.8931] manager: startup complete
Nov 22 05:19:45 compute-0 sudo[49740]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:45 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 22 05:19:46 compute-0 sudo[49967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxhnrmdtqicomeoqlvouhsumpjqmfdtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788786.0621426-168-223294742284893/AnsiballZ_dnf.py'
Nov 22 05:19:46 compute-0 sudo[49967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:46 compute-0 python3.9[49969]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:19:50 compute-0 sshd-session[49985]: error: kex_exchange_identification: read: Connection reset by peer
Nov 22 05:19:50 compute-0 sshd-session[49985]: Connection reset by 45.140.17.97 port 1850
Nov 22 05:19:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:19:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:19:52 compute-0 systemd[1]: Reloading.
Nov 22 05:19:53 compute-0 systemd-rc-local-generator[50024]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:19:53 compute-0 systemd-sysv-generator[50027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:19:53 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:19:54 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:19:54 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:19:54 compute-0 systemd[1]: run-rd5fac863e67c40f9b184d3766b70390a.service: Deactivated successfully.
Nov 22 05:19:54 compute-0 sudo[49967]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:54 compute-0 sudo[50428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvuilquehkjtcajxkwhrptwaxuhmrruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788794.6455152-180-27280418078191/AnsiballZ_stat.py'
Nov 22 05:19:54 compute-0 sudo[50428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:55 compute-0 python3.9[50430]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:19:55 compute-0 sudo[50428]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:55 compute-0 sudo[50580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjykezxintdqqtydedvwdnkdpsjzgcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788795.3487775-189-98771799856366/AnsiballZ_ini_file.py'
Nov 22 05:19:55 compute-0 sudo[50580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:55 compute-0 python3.9[50582]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:19:56 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 05:19:56 compute-0 sudo[50580]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:56 compute-0 sudo[50734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prewrvrpyyeebsgfucyyfinyajislsxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788796.217127-199-275177228603858/AnsiballZ_ini_file.py'
Nov 22 05:19:56 compute-0 sudo[50734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:56 compute-0 python3.9[50736]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:19:56 compute-0 sudo[50734]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:57 compute-0 sudo[50886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqiennpmvwxaiskvvcgpruvsisxkmhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788796.8731458-199-268097475774249/AnsiballZ_ini_file.py'
Nov 22 05:19:57 compute-0 sudo[50886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:57 compute-0 python3.9[50888]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:19:57 compute-0 sudo[50886]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:57 compute-0 sudo[51038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxebqtilucmjcfqfdyzhgptropnmkolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788797.5430927-214-219109018978881/AnsiballZ_ini_file.py'
Nov 22 05:19:57 compute-0 sudo[51038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:58 compute-0 python3.9[51040]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:19:58 compute-0 sudo[51038]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:58 compute-0 sudo[51190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevyqmhpbngvvyjyiccqxxgficlgrnqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788798.1845872-214-242256071181175/AnsiballZ_ini_file.py'
Nov 22 05:19:58 compute-0 sudo[51190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:58 compute-0 python3.9[51192]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:19:58 compute-0 sudo[51190]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:59 compute-0 sudo[51342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrxokhmpkluynturofrzayhbcmdzqvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788798.8677533-229-186801681601953/AnsiballZ_stat.py'
Nov 22 05:19:59 compute-0 sudo[51342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:19:59 compute-0 python3.9[51344]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:19:59 compute-0 sudo[51342]: pam_unix(sudo:session): session closed for user root
Nov 22 05:19:59 compute-0 sudo[51465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swkbjkcnfqogguztxszxhmylzrdsvzfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788798.8677533-229-186801681601953/AnsiballZ_copy.py'
Nov 22 05:19:59 compute-0 sudo[51465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:00 compute-0 python3.9[51467]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788798.8677533-229-186801681601953/.source _original_basename=.5hom73rt follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:00 compute-0 sudo[51465]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:00 compute-0 sudo[51617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btyrkejgocxpqaqpjnqsubplzifvcjfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788800.3444998-244-219057327018417/AnsiballZ_file.py'
Nov 22 05:20:00 compute-0 sudo[51617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:00 compute-0 python3.9[51619]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:00 compute-0 sudo[51617]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:01 compute-0 sudo[51769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncygrqknpsogyvvchntujqdkavvvdzfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788801.0313811-252-88849098420504/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 22 05:20:01 compute-0 sudo[51769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:01 compute-0 python3.9[51771]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 22 05:20:01 compute-0 sudo[51769]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:02 compute-0 sudo[51921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzpefpoviqpnrntaiitkcanhvaqjggev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788801.9186552-261-236017003237542/AnsiballZ_file.py'
Nov 22 05:20:02 compute-0 sudo[51921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:02 compute-0 python3.9[51923]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:02 compute-0 sudo[51921]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:03 compute-0 sudo[52073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tegrgbyemhadnjaoltycrpgbhxkyzsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788802.7320178-271-106496709824994/AnsiballZ_stat.py'
Nov 22 05:20:03 compute-0 sudo[52073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:03 compute-0 sudo[52073]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:03 compute-0 sudo[52198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuyhaffuszimwtdrurazatcjilsbeudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788802.7320178-271-106496709824994/AnsiballZ_copy.py'
Nov 22 05:20:03 compute-0 sudo[52198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:03 compute-0 sudo[52198]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:04 compute-0 sshd-session[52174]: Invalid user solana from 80.94.92.166 port 35608
Nov 22 05:20:04 compute-0 sshd-session[52174]: Connection closed by invalid user solana 80.94.92.166 port 35608 [preauth]
Nov 22 05:20:04 compute-0 sudo[52350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhlkhfjtgafoanfypwlgkjzvbswhroy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788804.1049936-286-128738959186742/AnsiballZ_slurp.py'
Nov 22 05:20:04 compute-0 sudo[52350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:04 compute-0 python3.9[52352]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 22 05:20:04 compute-0 sudo[52350]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:05 compute-0 sudo[52525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycpcuznscmsaftbqaeehltwupbzaabj ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788804.8969402-295-247995704436509/async_wrapper.py j507233999034 300 /home/zuul/.ansible/tmp/ansible-tmp-1763788804.8969402-295-247995704436509/AnsiballZ_edpm_os_net_config.py _'
Nov 22 05:20:05 compute-0 sudo[52525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:05 compute-0 ansible-async_wrapper.py[52527]: Invoked with j507233999034 300 /home/zuul/.ansible/tmp/ansible-tmp-1763788804.8969402-295-247995704436509/AnsiballZ_edpm_os_net_config.py _
Nov 22 05:20:05 compute-0 ansible-async_wrapper.py[52530]: Starting module and watcher
Nov 22 05:20:05 compute-0 ansible-async_wrapper.py[52530]: Start watching 52531 (300)
Nov 22 05:20:05 compute-0 ansible-async_wrapper.py[52531]: Start module (52531)
Nov 22 05:20:05 compute-0 ansible-async_wrapper.py[52527]: Return async_wrapper task started.
Nov 22 05:20:05 compute-0 sudo[52525]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:05 compute-0 python3.9[52532]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 22 05:20:06 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 22 05:20:06 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 22 05:20:06 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 22 05:20:06 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 22 05:20:06 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9067] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9081] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9613] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9614] audit: op="connection-add" uuid="e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8" name="br-ex-br" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9629] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9630] audit: op="connection-add" uuid="6dbe5176-d69d-43d2-ae99-9cb9a4536e43" name="br-ex-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9642] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9643] audit: op="connection-add" uuid="523734da-6a1f-4d94-a351-29cd9eb90d3d" name="eth1-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9654] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9655] audit: op="connection-add" uuid="d861a3cf-32db-4ec4-8a45-c6981b01c962" name="vlan20-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9666] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9668] audit: op="connection-add" uuid="fcdea826-d26b-4a70-b0cc-3b8c842ef2cb" name="vlan21-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9678] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9679] audit: op="connection-add" uuid="5a268958-41b7-4275-9191-e164fec00046" name="vlan22-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9692] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9693] audit: op="connection-add" uuid="4a0634c3-6b1d-4758-a608-f5568b835867" name="vlan23-port" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9712] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9727] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9728] audit: op="connection-add" uuid="ca2e9ad1-e067-48e3-86e8-8c179e3a623c" name="br-ex-if" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9753] audit: op="connection-update" uuid="8d97a97e-ce0a-5c97-95d5-8291b500636b" name="ci-private-network" args="ipv6.dns,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type,connection.controller,connection.slave-type,connection.master,connection.port-type,connection.timestamp,ipv4.dns,ipv4.addresses,ipv4.routing-rules,ipv4.method,ipv4.routes,ipv4.never-default" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9768] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9770] audit: op="connection-add" uuid="29f723c1-2b0e-4942-a473-20482ebe0a3e" name="vlan20-if" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9784] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9785] audit: op="connection-add" uuid="5a95e5dc-7a2e-4194-8554-f486f4eab24d" name="vlan21-if" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9801] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9803] audit: op="connection-add" uuid="11ff7f4c-cd22-4253-a231-ee878e554849" name="vlan22-if" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9816] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9818] audit: op="connection-add" uuid="6a4b13ae-dd6e-4a6a-88f1-cf7926d0a6ce" name="vlan23-if" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9830] audit: op="connection-delete" uuid="b63a3bd3-2d39-3e26-9215-4f6c298d6a18" name="Wired connection 1" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9859] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9873] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9877] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9878] audit: op="connection-activate" uuid="e3ef8fdc-feaa-4740-bf30-fbac9c54c3a8" name="br-ex-br" pid=52533 uid=0 result="success"
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9879] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9885] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9889] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6dbe5176-d69d-43d2-ae99-9cb9a4536e43)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9890] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9895] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9900] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (523734da-6a1f-4d94-a351-29cd9eb90d3d)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9901] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9907] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9910] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d861a3cf-32db-4ec4-8a45-c6981b01c962)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9911] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9916] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9920] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (fcdea826-d26b-4a70-b0cc-3b8c842ef2cb)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9922] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9928] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9933] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (5a268958-41b7-4275-9191-e164fec00046)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9935] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9941] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9946] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4a0634c3-6b1d-4758-a608-f5568b835867)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9946] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9949] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9951] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9961] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9965] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ca2e9ad1-e067-48e3-86e8-8c179e3a623c)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9966] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9969] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9970] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9972] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9973] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9983] device (eth1): disconnecting for new activation request.
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9984] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9986] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9988] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9988] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9991] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9994] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9997] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (29f723c1-2b0e-4942-a473-20482ebe0a3e)
Nov 22 05:20:07 compute-0 NetworkManager[49751]: <info>  [1763788807.9998] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0000] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0001] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0002] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0005] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0007] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0013] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (5a95e5dc-7a2e-4194-8554-f486f4eab24d)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0014] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0016] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0017] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0018] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0021] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0024] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0027] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (11ff7f4c-cd22-4253-a231-ee878e554849)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0028] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0030] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0032] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0032] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0034] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0037] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0040] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (6a4b13ae-dd6e-4a6a-88f1-cf7926d0a6ce)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0041] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0043] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0044] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0045] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0046] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0055] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=52533 uid=0 result="success"
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0056] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0058] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0059] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0064] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0067] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0069] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0071] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0072] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0076] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0088] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0092] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0094] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0099] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0103] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 systemd-udevd[52537]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 05:20:08 compute-0 kernel: Timeout policy base is empty
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0106] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0107] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0111] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0115] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0119] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0121] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0126] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): canceled DHCP transaction
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0130] dhcp4 (eth0): state changed no lease
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0132] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0142] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0146] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52533 uid=0 result="fail" reason="Device is not activated"
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0151] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 22 05:20:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0203] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0216] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0222] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 22 05:20:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0268] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0279] device (eth1): disconnecting for new activation request.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0279] audit: op="connection-activate" uuid="8d97a97e-ce0a-5c97-95d5-8291b500636b" name="ci-private-network" pid=52533 uid=0 result="success"
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0286] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 05:20:08 compute-0 kernel: br-ex: entered promiscuous mode
Nov 22 05:20:08 compute-0 kernel: vlan22: entered promiscuous mode
Nov 22 05:20:08 compute-0 systemd-udevd[52539]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0531] device (eth1): Activation: starting connection 'ci-private-network' (8d97a97e-ce0a-5c97-95d5-8291b500636b)
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0535] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 kernel: vlan21: entered promiscuous mode
Nov 22 05:20:08 compute-0 systemd-udevd[52638]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0558] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0561] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0566] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0575] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0590] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0596] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0597] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0598] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0599] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0599] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0600] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0601] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 kernel: vlan23: entered promiscuous mode
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0605] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0610] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0612] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0615] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0618] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0621] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0624] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0626] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0628] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0630] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0633] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0636] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0638] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0642] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 22 05:20:08 compute-0 kernel: vlan20: entered promiscuous mode
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0655] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0658] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0666] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0677] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0686] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0695] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0739] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0740] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0741] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0746] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0750] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0769] device (eth1): Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0773] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0778] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0783] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0788] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0793] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0801] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0806] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0813] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0833] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0846] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0856] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0857] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0863] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0870] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0873] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 05:20:08 compute-0 NetworkManager[49751]: <info>  [1763788808.0879] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.2270] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 sshd-session[52596]: Connection closed by authenticating user root 123.253.22.30 port 57092 [preauth]
Nov 22 05:20:09 compute-0 sudo[52891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkgjyhzdrvmvextbatruydqbiwgepvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788808.9226646-295-262607791993990/AnsiballZ_async_status.py'
Nov 22 05:20:09 compute-0 sudo[52891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.4137] checkpoint[0x55a58f81d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.4140] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 python3.9[52893]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=status _async_dir=/root/.ansible_async
Nov 22 05:20:09 compute-0 sudo[52891]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.7333] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.7347] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.9787] audit: op="networking-control" arg="global-dns-configuration" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.9822] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.9856] audit: op="networking-control" arg="global-dns-configuration" pid=52533 uid=0 result="success"
Nov 22 05:20:09 compute-0 NetworkManager[49751]: <info>  [1763788809.9883] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 05:20:10 compute-0 NetworkManager[49751]: <info>  [1763788810.1219] checkpoint[0x55a58f81da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 22 05:20:10 compute-0 NetworkManager[49751]: <info>  [1763788810.1223] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52533 uid=0 result="success"
Nov 22 05:20:10 compute-0 ansible-async_wrapper.py[52531]: Module complete (52531)
Nov 22 05:20:10 compute-0 ansible-async_wrapper.py[52530]: Done in kid B.
Nov 22 05:20:12 compute-0 sudo[52997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hemuwdptwgqortdmzgqqlcptltinacek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788808.9226646-295-262607791993990/AnsiballZ_async_status.py'
Nov 22 05:20:12 compute-0 sudo[52997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:13 compute-0 python3.9[52999]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=status _async_dir=/root/.ansible_async
Nov 22 05:20:13 compute-0 sudo[52997]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:13 compute-0 sudo[53097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaeogjxcwlfqngvgukjsytisqgunsdby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788808.9226646-295-262607791993990/AnsiballZ_async_status.py'
Nov 22 05:20:13 compute-0 sudo[53097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:13 compute-0 python3.9[53099]: ansible-ansible.legacy.async_status Invoked with jid=j507233999034.52527 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 05:20:13 compute-0 sudo[53097]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:14 compute-0 sudo[53249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kknjjpjqkzwpqyycxvxjbrwhvhdzyegp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788813.8271189-322-191219942198960/AnsiballZ_stat.py'
Nov 22 05:20:14 compute-0 sudo[53249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:14 compute-0 python3.9[53251]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:20:14 compute-0 sudo[53249]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:14 compute-0 sudo[53372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixowwcxlmvhxopghauiknsrkijmvmxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788813.8271189-322-191219942198960/AnsiballZ_copy.py'
Nov 22 05:20:14 compute-0 sudo[53372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:14 compute-0 python3.9[53374]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788813.8271189-322-191219942198960/.source.returncode _original_basename=.0qwnkb24 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:14 compute-0 sudo[53372]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:15 compute-0 sudo[53524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zncgeokrmfmcclwgcehhbappkpymsxec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788815.1091008-338-8598487192105/AnsiballZ_stat.py'
Nov 22 05:20:15 compute-0 sudo[53524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:15 compute-0 python3.9[53526]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:20:15 compute-0 sudo[53524]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:15 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 05:20:15 compute-0 sudo[53649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icbiybnhoqwusibejrufjbhsbngjcpql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788815.1091008-338-8598487192105/AnsiballZ_copy.py'
Nov 22 05:20:15 compute-0 sudo[53649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:16 compute-0 python3.9[53651]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788815.1091008-338-8598487192105/.source.cfg _original_basename=.jitq13se follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:16 compute-0 sudo[53649]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:16 compute-0 sudo[53802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqaxxsuxgaisidnwvcjwbokarxzszad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788816.320788-353-76124052169835/AnsiballZ_systemd.py'
Nov 22 05:20:16 compute-0 sudo[53802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:16 compute-0 python3.9[53804]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:20:16 compute-0 systemd[1]: Reloading Network Manager...
Nov 22 05:20:17 compute-0 NetworkManager[49751]: <info>  [1763788817.0029] audit: op="reload" arg="0" pid=53808 uid=0 result="success"
Nov 22 05:20:17 compute-0 NetworkManager[49751]: <info>  [1763788817.0036] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 22 05:20:17 compute-0 systemd[1]: Reloaded Network Manager.
Nov 22 05:20:17 compute-0 sudo[53802]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:17 compute-0 sshd-session[45760]: Connection closed by 192.168.122.30 port 39580
Nov 22 05:20:17 compute-0 sshd-session[45757]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:20:17 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 22 05:20:17 compute-0 systemd[1]: session-9.scope: Consumed 51.098s CPU time.
Nov 22 05:20:17 compute-0 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Nov 22 05:20:17 compute-0 systemd-logind[798]: Removed session 9.
Nov 22 05:20:23 compute-0 sshd-session[53839]: Accepted publickey for zuul from 192.168.122.30 port 45564 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:20:23 compute-0 systemd-logind[798]: New session 10 of user zuul.
Nov 22 05:20:23 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 22 05:20:23 compute-0 sshd-session[53839]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:20:24 compute-0 python3.9[53992]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:20:25 compute-0 python3.9[54146]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:20:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 05:20:27 compute-0 python3.9[54341]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:20:27 compute-0 sshd-session[53842]: Connection closed by 192.168.122.30 port 45564
Nov 22 05:20:27 compute-0 sshd-session[53839]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:20:27 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 22 05:20:27 compute-0 systemd[1]: session-10.scope: Consumed 2.624s CPU time.
Nov 22 05:20:27 compute-0 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Nov 22 05:20:27 compute-0 systemd-logind[798]: Removed session 10.
Nov 22 05:20:33 compute-0 sshd-session[54369]: Accepted publickey for zuul from 192.168.122.30 port 58994 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:20:33 compute-0 systemd-logind[798]: New session 11 of user zuul.
Nov 22 05:20:33 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 22 05:20:33 compute-0 sshd-session[54369]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:20:34 compute-0 python3.9[54522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:20:35 compute-0 python3.9[54676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:20:36 compute-0 sudo[54831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tulyisjnyrptrquftqokotgypfwzpeln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788835.8361886-40-7005598746472/AnsiballZ_setup.py'
Nov 22 05:20:36 compute-0 sudo[54831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:36 compute-0 python3.9[54833]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:20:36 compute-0 sudo[54831]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:37 compute-0 sudo[54915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbzbhqewmfgdpampbmrvlhfvlsvomjhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788835.8361886-40-7005598746472/AnsiballZ_dnf.py'
Nov 22 05:20:37 compute-0 sudo[54915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:37 compute-0 python3.9[54917]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:20:38 compute-0 sudo[54915]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:39 compute-0 sudo[55069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwlcirhdxlugaenbsthwhiyllendcpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788839.1058557-52-147835145962310/AnsiballZ_setup.py'
Nov 22 05:20:39 compute-0 sudo[55069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:39 compute-0 python3.9[55071]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:20:40 compute-0 sudo[55069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:40 compute-0 sudo[55264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiterjypljbtypgfjtlgqhorwlivkrue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788840.239672-63-136627726430076/AnsiballZ_file.py'
Nov 22 05:20:40 compute-0 sudo[55264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:40 compute-0 python3.9[55266]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:40 compute-0 sudo[55264]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:41 compute-0 sudo[55416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylhnmqfycekvpcmquijdauvcpktksygt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788841.1910784-71-172022924293009/AnsiballZ_command.py'
Nov 22 05:20:41 compute-0 sudo[55416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:41 compute-0 python3.9[55418]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3380072165-merged.mount: Deactivated successfully.
Nov 22 05:20:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1235679625-merged.mount: Deactivated successfully.
Nov 22 05:20:41 compute-0 podman[55419]: 2025-11-22 05:20:41.934892304 +0000 UTC m=+0.046162166 system refresh
Nov 22 05:20:41 compute-0 sudo[55416]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:42 compute-0 sudo[55579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbvicqwuyfjitbxpprgvpqtezcnmfpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788842.1311836-79-179516530359643/AnsiballZ_stat.py'
Nov 22 05:20:42 compute-0 sudo[55579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:42 compute-0 python3.9[55581]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:20:42 compute-0 sudo[55579]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:20:43 compute-0 sudo[55702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullgbjdemkgaghdjejygusbfhtbstojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788842.1311836-79-179516530359643/AnsiballZ_copy.py'
Nov 22 05:20:43 compute-0 sudo[55702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:43 compute-0 python3.9[55704]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788842.1311836-79-179516530359643/.source.json follow=False _original_basename=podman_network_config.j2 checksum=285d677619700b868d4522aa7e8707442ef518c9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:20:43 compute-0 sudo[55702]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:44 compute-0 sudo[55854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqkrdhzhzbkpfrhgotdgmuwbyqwepcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788843.8885467-94-212673733395452/AnsiballZ_stat.py'
Nov 22 05:20:44 compute-0 sudo[55854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:44 compute-0 python3.9[55856]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:20:44 compute-0 sudo[55854]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:44 compute-0 sudo[55977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-honedakcbkkdghspulbekwzhujtijvbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788843.8885467-94-212673733395452/AnsiballZ_copy.py'
Nov 22 05:20:44 compute-0 sudo[55977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:44 compute-0 python3.9[55979]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788843.8885467-94-212673733395452/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5248920f79a1cb67b3ef013f523e4500b06a731f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:20:45 compute-0 sudo[55977]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:45 compute-0 sudo[56129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noastifkrsjdrwbktogncavtjrifxdxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788845.2358737-110-236920510582512/AnsiballZ_ini_file.py'
Nov 22 05:20:45 compute-0 sudo[56129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:45 compute-0 python3.9[56131]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:20:45 compute-0 sudo[56129]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:46 compute-0 sudo[56281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afywcyilyaiuznrpyhbmexmkaeqjykpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788846.1559951-110-110025850103329/AnsiballZ_ini_file.py'
Nov 22 05:20:46 compute-0 sudo[56281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:46 compute-0 python3.9[56283]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:20:46 compute-0 sudo[56281]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:47 compute-0 sudo[56433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgymjyzkirgvuxuwigrtxxdkjcvqnoxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788846.8341208-110-145387667697455/AnsiballZ_ini_file.py'
Nov 22 05:20:47 compute-0 sudo[56433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:47 compute-0 python3.9[56435]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:20:47 compute-0 sudo[56433]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:47 compute-0 sudo[56585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orsimregbvzlmplmlagmzskxiwvkbazl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788847.5107622-110-272805129784872/AnsiballZ_ini_file.py'
Nov 22 05:20:47 compute-0 sudo[56585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:48 compute-0 python3.9[56587]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:20:48 compute-0 sudo[56585]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:48 compute-0 sudo[56737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyccfwmezbamwawceeihqehczdqrrmsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788848.2997835-141-88763784802355/AnsiballZ_dnf.py'
Nov 22 05:20:48 compute-0 sudo[56737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:48 compute-0 python3.9[56739]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:20:50 compute-0 sudo[56737]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:50 compute-0 sudo[56890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbokzcqqoletdemozsbfjdoaummhsqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788850.533025-152-38146534143712/AnsiballZ_setup.py'
Nov 22 05:20:50 compute-0 sudo[56890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:51 compute-0 python3.9[56892]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:20:51 compute-0 sudo[56890]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:51 compute-0 sudo[57044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijuqulewusqvrnazrfxgumrewprpjgog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788851.347679-160-219735777377129/AnsiballZ_stat.py'
Nov 22 05:20:51 compute-0 sudo[57044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:52 compute-0 python3.9[57046]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:20:52 compute-0 sudo[57044]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:52 compute-0 sudo[57196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnsrapogggpgzvltwcamhcpvcigyhrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788852.5318432-169-192127937346636/AnsiballZ_stat.py'
Nov 22 05:20:52 compute-0 sudo[57196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:53 compute-0 python3.9[57198]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:20:53 compute-0 sudo[57196]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:53 compute-0 sudo[57348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfwvtaapexdlypnsfofkqhxohztzhyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788853.318953-179-52938779612211/AnsiballZ_command.py'
Nov 22 05:20:53 compute-0 sudo[57348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:53 compute-0 python3.9[57350]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:20:53 compute-0 sudo[57348]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:54 compute-0 sudo[57501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iygeisomcttqkmjehjfxvzcjvbgqqmmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788854.1384537-189-276356683085863/AnsiballZ_service_facts.py'
Nov 22 05:20:54 compute-0 sudo[57501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:54 compute-0 python3.9[57503]: ansible-service_facts Invoked
Nov 22 05:20:54 compute-0 network[57520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:20:54 compute-0 network[57521]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:20:54 compute-0 network[57522]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:20:58 compute-0 sudo[57501]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:59 compute-0 sudo[57805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbeplbsdvzlilthmvemgrjuaxgeptsoy ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763788858.6960132-204-257256721151983/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763788858.6960132-204-257256721151983/args'
Nov 22 05:20:59 compute-0 sudo[57805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:20:59 compute-0 sudo[57805]: pam_unix(sudo:session): session closed for user root
Nov 22 05:20:59 compute-0 sudo[57972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwherakgitybdyvacjtetbnakuvcmxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788859.4819043-215-234516287421450/AnsiballZ_dnf.py'
Nov 22 05:20:59 compute-0 sudo[57972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:00 compute-0 python3.9[57974]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:21:01 compute-0 sudo[57972]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:02 compute-0 sudo[58125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgdksvqeeabsshhvadtrnjxesnvibzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788861.824637-228-188556465095570/AnsiballZ_package_facts.py'
Nov 22 05:21:02 compute-0 sudo[58125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:02 compute-0 python3.9[58127]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 05:21:02 compute-0 sudo[58125]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:03 compute-0 sudo[58277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euuevknmrzhjttepbexprrlxrycbfgdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788863.289942-238-197488353136691/AnsiballZ_stat.py'
Nov 22 05:21:03 compute-0 sudo[58277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:03 compute-0 python3.9[58279]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:03 compute-0 sudo[58277]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:04 compute-0 sudo[58402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cofmajizkbyfdwcstjwhzwenkkfrefjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788863.289942-238-197488353136691/AnsiballZ_copy.py'
Nov 22 05:21:04 compute-0 sudo[58402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:04 compute-0 python3.9[58404]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788863.289942-238-197488353136691/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:04 compute-0 sudo[58402]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:05 compute-0 sudo[58556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxkgxeenadhxxdclsygpfmmuutyvuym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788864.6128426-253-224061885439696/AnsiballZ_stat.py'
Nov 22 05:21:05 compute-0 sudo[58556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:05 compute-0 python3.9[58558]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:05 compute-0 sudo[58556]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:05 compute-0 sudo[58681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vblopbaamobtmivscdlrdudqpalcqbir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788864.6128426-253-224061885439696/AnsiballZ_copy.py'
Nov 22 05:21:05 compute-0 sudo[58681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:05 compute-0 python3.9[58683]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788864.6128426-253-224061885439696/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:05 compute-0 sudo[58681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:06 compute-0 sudo[58835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ganbrzxseiyrohjcjfyslgnqmjdmsdtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788866.314195-274-30581718435740/AnsiballZ_lineinfile.py'
Nov 22 05:21:06 compute-0 sudo[58835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:06 compute-0 python3.9[58837]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:07 compute-0 sudo[58835]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:07 compute-0 sudo[58989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdbpplqvxfgsdzqmnxgykqwpnzopaglo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788867.5374682-289-130560352568613/AnsiballZ_setup.py'
Nov 22 05:21:07 compute-0 sudo[58989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:08 compute-0 python3.9[58991]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:21:08 compute-0 sudo[58989]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:08 compute-0 sudo[59073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvevralflmkovjucqiqkcywvozzwpfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788867.5374682-289-130560352568613/AnsiballZ_systemd.py'
Nov 22 05:21:08 compute-0 sudo[59073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:09 compute-0 python3.9[59075]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:09 compute-0 sudo[59073]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:10 compute-0 sudo[59227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fziipqfbrauuxmrghplbjrkmzklrnprj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788869.805452-305-257970136161318/AnsiballZ_setup.py'
Nov 22 05:21:10 compute-0 sudo[59227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:10 compute-0 python3.9[59229]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:21:10 compute-0 sudo[59227]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:10 compute-0 sudo[59311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgesyeksodiefhxutlxmiytyvqoybnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788869.805452-305-257970136161318/AnsiballZ_systemd.py'
Nov 22 05:21:10 compute-0 sudo[59311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:11 compute-0 python3.9[59313]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:21:11 compute-0 chronyd[781]: chronyd exiting
Nov 22 05:21:11 compute-0 systemd[1]: Stopping NTP client/server...
Nov 22 05:21:11 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 22 05:21:11 compute-0 systemd[1]: Stopped NTP client/server.
Nov 22 05:21:11 compute-0 systemd[1]: Starting NTP client/server...
Nov 22 05:21:11 compute-0 chronyd[59321]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 05:21:11 compute-0 chronyd[59321]: Frequency -25.829 +/- 0.171 ppm read from /var/lib/chrony/drift
Nov 22 05:21:11 compute-0 chronyd[59321]: Loaded seccomp filter (level 2)
Nov 22 05:21:11 compute-0 systemd[1]: Started NTP client/server.
Nov 22 05:21:11 compute-0 sudo[59311]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:11 compute-0 sshd-session[54372]: Connection closed by 192.168.122.30 port 58994
Nov 22 05:21:11 compute-0 sshd-session[54369]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:21:11 compute-0 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Nov 22 05:21:11 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 22 05:21:11 compute-0 systemd[1]: session-11.scope: Consumed 27.360s CPU time.
Nov 22 05:21:11 compute-0 systemd-logind[798]: Removed session 11.
Nov 22 05:21:17 compute-0 sshd-session[59347]: Accepted publickey for zuul from 192.168.122.30 port 51990 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:21:17 compute-0 systemd-logind[798]: New session 12 of user zuul.
Nov 22 05:21:17 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 22 05:21:17 compute-0 sshd-session[59347]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:21:17 compute-0 sudo[59500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxhgbplfzzhwysfefesiyorzzbhbnry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788877.5151782-22-99231920691854/AnsiballZ_file.py'
Nov 22 05:21:17 compute-0 sudo[59500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:18 compute-0 python3.9[59502]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:18 compute-0 sudo[59500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:18 compute-0 sudo[59652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrpgesmbcaukqixoscajeoobvjfzqsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788878.3566797-34-188032673110942/AnsiballZ_stat.py'
Nov 22 05:21:18 compute-0 sudo[59652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:19 compute-0 python3.9[59654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:19 compute-0 sudo[59652]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:19 compute-0 sudo[59775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihgttmyzveujfdnwmyzrpjgzyhpoxjug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788878.3566797-34-188032673110942/AnsiballZ_copy.py'
Nov 22 05:21:19 compute-0 sudo[59775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:19 compute-0 python3.9[59777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788878.3566797-34-188032673110942/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:19 compute-0 sudo[59775]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:20 compute-0 sshd-session[59350]: Connection closed by 192.168.122.30 port 51990
Nov 22 05:21:20 compute-0 sshd-session[59347]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:21:20 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 22 05:21:20 compute-0 systemd[1]: session-12.scope: Consumed 1.767s CPU time.
Nov 22 05:21:20 compute-0 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Nov 22 05:21:20 compute-0 systemd-logind[798]: Removed session 12.
Nov 22 05:21:25 compute-0 sshd-session[59802]: Accepted publickey for zuul from 192.168.122.30 port 51996 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:21:25 compute-0 systemd-logind[798]: New session 13 of user zuul.
Nov 22 05:21:25 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 22 05:21:25 compute-0 sshd-session[59802]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:21:26 compute-0 python3.9[59955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:21:27 compute-0 sudo[60109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxjipbqajmzrrqyobuzdjyqczakguhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788887.253111-33-251296386894668/AnsiballZ_file.py'
Nov 22 05:21:27 compute-0 sudo[60109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:27 compute-0 python3.9[60111]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:27 compute-0 sudo[60109]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:28 compute-0 sudo[60284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhnzrvasglamdznzehhondllstmzyvdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788888.1421494-41-236861592226798/AnsiballZ_stat.py'
Nov 22 05:21:28 compute-0 sudo[60284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:28 compute-0 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:28 compute-0 sudo[60284]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:29 compute-0 sudo[60407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmmrtaholsinrjewqtqcwjngqizbmkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788888.1421494-41-236861592226798/AnsiballZ_copy.py'
Nov 22 05:21:29 compute-0 sudo[60407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:29 compute-0 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763788888.1421494-41-236861592226798/.source.json _original_basename=.m_sh_aqj follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:29 compute-0 sudo[60407]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:30 compute-0 sudo[60559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yenafzuxiqacdiucttiykcapnonadusy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788889.9389427-64-70300357265889/AnsiballZ_stat.py'
Nov 22 05:21:30 compute-0 sudo[60559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:30 compute-0 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:30 compute-0 sudo[60559]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:30 compute-0 sudo[60682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykpeynaugvksljfovtvnlyirjxzojbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788889.9389427-64-70300357265889/AnsiballZ_copy.py'
Nov 22 05:21:30 compute-0 sudo[60682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:31 compute-0 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788889.9389427-64-70300357265889/.source _original_basename=.cpmx41u5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:31 compute-0 sudo[60682]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:31 compute-0 sudo[60834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbejdnefnkqdmomgkofrzcnfmpfkbkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788891.249591-80-262450509084542/AnsiballZ_file.py'
Nov 22 05:21:31 compute-0 sudo[60834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:31 compute-0 python3.9[60836]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:21:31 compute-0 sudo[60834]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:32 compute-0 sudo[60986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shserwkqpbmhncgojdrwkuhnaeahuxju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788891.8855565-88-249004461455160/AnsiballZ_stat.py'
Nov 22 05:21:32 compute-0 sudo[60986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:32 compute-0 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:32 compute-0 sudo[60986]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:32 compute-0 sudo[61109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citikxzbkukmwkkmkfswjamqvglesmtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788891.8855565-88-249004461455160/AnsiballZ_copy.py'
Nov 22 05:21:32 compute-0 sudo[61109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:32 compute-0 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788891.8855565-88-249004461455160/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:21:32 compute-0 sudo[61109]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:33 compute-0 sudo[61261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmzorycifpgghqzwbcfifvapsqnrlvgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788893.0867453-88-114917941509815/AnsiballZ_stat.py'
Nov 22 05:21:33 compute-0 sudo[61261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:33 compute-0 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:33 compute-0 sudo[61261]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:33 compute-0 sudo[61384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahhdcsspoanhzhwxkqadaerfmcyifyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788893.0867453-88-114917941509815/AnsiballZ_copy.py'
Nov 22 05:21:33 compute-0 sudo[61384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:34 compute-0 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763788893.0867453-88-114917941509815/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:21:34 compute-0 sudo[61384]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:34 compute-0 sudo[61536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-besidbedbjnsvripnjmjqshsfnmwiige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788894.2557018-117-162125533845395/AnsiballZ_file.py'
Nov 22 05:21:34 compute-0 sudo[61536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:34 compute-0 python3.9[61538]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:34 compute-0 sudo[61536]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:35 compute-0 sudo[61688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feyeyigbhrwsnpqiixcdopynnsiwzxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788894.8753633-125-253932159872018/AnsiballZ_stat.py'
Nov 22 05:21:35 compute-0 sudo[61688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:35 compute-0 python3.9[61690]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:35 compute-0 sudo[61688]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:35 compute-0 sudo[61811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwbsuycwwaysmsnogvxessasuevkbhbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788894.8753633-125-253932159872018/AnsiballZ_copy.py'
Nov 22 05:21:35 compute-0 sudo[61811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:35 compute-0 python3.9[61813]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788894.8753633-125-253932159872018/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:35 compute-0 sudo[61811]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:36 compute-0 sudo[61963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntbicncnmuqxiwvjuvivgjvblscwnkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788896.0682867-140-65509616201285/AnsiballZ_stat.py'
Nov 22 05:21:36 compute-0 sudo[61963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:36 compute-0 python3.9[61965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:36 compute-0 sudo[61963]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:36 compute-0 sudo[62086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwrnobsgqsliwvcsklprcnaehfasdwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788896.0682867-140-65509616201285/AnsiballZ_copy.py'
Nov 22 05:21:36 compute-0 sudo[62086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:37 compute-0 python3.9[62088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788896.0682867-140-65509616201285/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:37 compute-0 sudo[62086]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:37 compute-0 sudo[62238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsywqeqstfsaydukbionivurqhgjisf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788897.3267498-155-224993886428453/AnsiballZ_systemd.py'
Nov 22 05:21:37 compute-0 sudo[62238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:38 compute-0 python3.9[62240]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:38 compute-0 systemd[1]: Reloading.
Nov 22 05:21:38 compute-0 systemd-rc-local-generator[62263]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:38 compute-0 systemd-sysv-generator[62270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:38 compute-0 systemd[1]: Reloading.
Nov 22 05:21:38 compute-0 systemd-sysv-generator[62308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:38 compute-0 systemd-rc-local-generator[62304]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:38 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 22 05:21:38 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 22 05:21:38 compute-0 sudo[62238]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:39 compute-0 sudo[62463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goyyoxevilrobzcioyhvyijvsrbqpjum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788898.872449-163-74482882916275/AnsiballZ_stat.py'
Nov 22 05:21:39 compute-0 sudo[62463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:39 compute-0 python3.9[62465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:39 compute-0 sudo[62463]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:39 compute-0 sudo[62586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfkljkddubsjzkkciejeoaivglfzbmmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788898.872449-163-74482882916275/AnsiballZ_copy.py'
Nov 22 05:21:39 compute-0 sudo[62586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:39 compute-0 python3.9[62588]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788898.872449-163-74482882916275/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:39 compute-0 sudo[62586]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:40 compute-0 sudo[62738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbuwpijekseclouyocrsfajyirqftuzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788900.1310968-178-194616479260024/AnsiballZ_stat.py'
Nov 22 05:21:40 compute-0 sudo[62738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:40 compute-0 python3.9[62740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:40 compute-0 sudo[62738]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:41 compute-0 sudo[62861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcvgfzfwsetpefpnkrvzqsuhnzeynczz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788900.1310968-178-194616479260024/AnsiballZ_copy.py'
Nov 22 05:21:41 compute-0 sudo[62861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:41 compute-0 python3.9[62863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788900.1310968-178-194616479260024/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:41 compute-0 sudo[62861]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:41 compute-0 sudo[63013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkofgdxnxhejrnxprpylanshyquowiid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788901.4277816-193-223629282367197/AnsiballZ_systemd.py'
Nov 22 05:21:41 compute-0 sudo[63013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:42 compute-0 python3.9[63015]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:42 compute-0 systemd[1]: Reloading.
Nov 22 05:21:42 compute-0 systemd-sysv-generator[63043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:42 compute-0 systemd-rc-local-generator[63040]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:42 compute-0 systemd[1]: Reloading.
Nov 22 05:21:42 compute-0 systemd-rc-local-generator[63079]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:42 compute-0 systemd-sysv-generator[63083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:42 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 05:21:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 05:21:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 05:21:42 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 05:21:42 compute-0 sudo[63013]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:43 compute-0 python3.9[63242]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:21:43 compute-0 network[63259]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:21:43 compute-0 network[63260]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:21:43 compute-0 network[63261]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:21:46 compute-0 sudo[63521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdmyqbnsvumnemhejhrvmqoptiyucnfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788906.3119218-209-13650084410962/AnsiballZ_systemd.py'
Nov 22 05:21:46 compute-0 sudo[63521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:46 compute-0 python3.9[63523]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:46 compute-0 systemd[1]: Reloading.
Nov 22 05:21:47 compute-0 systemd-sysv-generator[63554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:47 compute-0 systemd-rc-local-generator[63550]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:47 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 22 05:21:47 compute-0 iptables.init[63563]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 22 05:21:47 compute-0 iptables.init[63563]: iptables: Flushing firewall rules: [  OK  ]
Nov 22 05:21:47 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 22 05:21:47 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 22 05:21:47 compute-0 sudo[63521]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:48 compute-0 sudo[63758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtwthbonjsdooryetdwveyuyjznhofv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788907.7424185-209-203887780039772/AnsiballZ_systemd.py'
Nov 22 05:21:48 compute-0 sudo[63758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:48 compute-0 python3.9[63760]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:48 compute-0 sudo[63758]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:48 compute-0 sudo[63912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okoohwyyssyobrzuimvvvdikqvtyjqpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788908.5938237-225-149129056676012/AnsiballZ_systemd.py'
Nov 22 05:21:48 compute-0 sudo[63912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:49 compute-0 python3.9[63914]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:21:49 compute-0 systemd[1]: Reloading.
Nov 22 05:21:49 compute-0 systemd-rc-local-generator[63934]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:21:49 compute-0 systemd-sysv-generator[63942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:21:49 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 22 05:21:49 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 22 05:21:49 compute-0 sudo[63912]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:50 compute-0 sudo[64104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjtkrtzpcgqaahyrgzjgfnxlrrfwhbkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788909.8645802-233-14020147512438/AnsiballZ_command.py'
Nov 22 05:21:50 compute-0 sudo[64104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:50 compute-0 python3.9[64106]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:21:51 compute-0 sudo[64104]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:51 compute-0 sudo[64257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frspibyfpxqopqqqiglrfspweboawovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788911.4621804-247-16553724278889/AnsiballZ_stat.py'
Nov 22 05:21:51 compute-0 sudo[64257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:51 compute-0 python3.9[64259]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:51 compute-0 sudo[64257]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:52 compute-0 sudo[64382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrwepvrraamqlocmwzdtaanwppwanli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788911.4621804-247-16553724278889/AnsiballZ_copy.py'
Nov 22 05:21:52 compute-0 sudo[64382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:52 compute-0 python3.9[64384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788911.4621804-247-16553724278889/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:52 compute-0 sudo[64382]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:53 compute-0 sudo[64535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkdlgqcdesuxnkanwtlffgzrbtiatnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788912.6889222-262-14468868899818/AnsiballZ_systemd.py'
Nov 22 05:21:53 compute-0 sudo[64535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:53 compute-0 python3.9[64537]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:21:53 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 22 05:21:53 compute-0 sshd[1006]: Received SIGHUP; restarting.
Nov 22 05:21:53 compute-0 sshd[1006]: Server listening on 0.0.0.0 port 22.
Nov 22 05:21:53 compute-0 sshd[1006]: Server listening on :: port 22.
Nov 22 05:21:53 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 22 05:21:53 compute-0 sudo[64535]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:53 compute-0 sudo[64691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igsbqqoqxckjsnkwevigisuqfaqtvxxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788913.6566687-270-203335640645685/AnsiballZ_file.py'
Nov 22 05:21:53 compute-0 sudo[64691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:54 compute-0 python3.9[64693]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:54 compute-0 sudo[64691]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:54 compute-0 sudo[64843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-salalocrahqagmgttbhgpvedykgnycvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788914.3083594-278-91404843847875/AnsiballZ_stat.py'
Nov 22 05:21:54 compute-0 sudo[64843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:54 compute-0 python3.9[64845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:54 compute-0 sudo[64843]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:55 compute-0 sudo[64966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-styltqrkuswpitsiyhmolawfsavxjaax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788914.3083594-278-91404843847875/AnsiballZ_copy.py'
Nov 22 05:21:55 compute-0 sudo[64966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:55 compute-0 python3.9[64968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788914.3083594-278-91404843847875/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:55 compute-0 sudo[64966]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:56 compute-0 sudo[65118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxqflsuhdgnijsitgzvstjdgsjnrclnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788915.780994-296-218874886199516/AnsiballZ_timezone.py'
Nov 22 05:21:56 compute-0 sudo[65118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:56 compute-0 python3.9[65120]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 05:21:56 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 05:21:56 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 05:21:56 compute-0 sudo[65118]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:57 compute-0 sudo[65274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlpusqyuuvabujsryhxthutejuohiht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788916.9001591-305-261649069092547/AnsiballZ_file.py'
Nov 22 05:21:57 compute-0 sudo[65274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:57 compute-0 python3.9[65276]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:57 compute-0 sudo[65274]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:58 compute-0 sudo[65426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlhpgliahzcouiznoiveawduccqrlttw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788917.7073703-313-103320882982555/AnsiballZ_stat.py'
Nov 22 05:21:58 compute-0 sudo[65426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:58 compute-0 python3.9[65428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:58 compute-0 sudo[65426]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:58 compute-0 sudo[65549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amozdvqqkzzigcdfjbvzztzaulrbzwdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788917.7073703-313-103320882982555/AnsiballZ_copy.py'
Nov 22 05:21:58 compute-0 sudo[65549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:58 compute-0 python3.9[65551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788917.7073703-313-103320882982555/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:21:58 compute-0 sudo[65549]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:59 compute-0 sudo[65701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnaitvkbjszqzvpfymjcloutambgrefe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788919.1458921-328-116604979989003/AnsiballZ_stat.py'
Nov 22 05:21:59 compute-0 sudo[65701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:21:59 compute-0 python3.9[65703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:21:59 compute-0 sudo[65701]: pam_unix(sudo:session): session closed for user root
Nov 22 05:21:59 compute-0 sudo[65824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymjcfdhveetgrhnyjnosccrponicvtye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788919.1458921-328-116604979989003/AnsiballZ_copy.py'
Nov 22 05:21:59 compute-0 sudo[65824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:00 compute-0 python3.9[65826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763788919.1458921-328-116604979989003/.source.yaml _original_basename=.f55fmubb follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:00 compute-0 sudo[65824]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:00 compute-0 sudo[65976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utoxkihdxcthyvvgxmajlrusuvlxeuko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788920.3817027-343-5806126297593/AnsiballZ_stat.py'
Nov 22 05:22:00 compute-0 sudo[65976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:00 compute-0 python3.9[65978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:00 compute-0 sudo[65976]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:01 compute-0 sudo[66099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkqmsxnxpvweudjwluvtvvxdwfkbbqhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788920.3817027-343-5806126297593/AnsiballZ_copy.py'
Nov 22 05:22:01 compute-0 sudo[66099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:01 compute-0 python3.9[66101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788920.3817027-343-5806126297593/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:01 compute-0 sudo[66099]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:01 compute-0 sudo[66251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzwjbnhtmetqanofutrogxeryycfren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788921.6762316-358-141187486647515/AnsiballZ_command.py'
Nov 22 05:22:01 compute-0 sudo[66251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:02 compute-0 python3.9[66253]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:02 compute-0 sudo[66251]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:02 compute-0 sudo[66404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwcazaubiyfsnaxgcgrfmtaepadobfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788922.3232737-366-201887427015593/AnsiballZ_command.py'
Nov 22 05:22:02 compute-0 sudo[66404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:02 compute-0 python3.9[66406]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:02 compute-0 sudo[66404]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:03 compute-0 sudo[66557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njvbohtdomnfqleaelnmfocupvrngtkg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763788922.9986274-374-246999960105672/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 05:22:03 compute-0 sudo[66557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:03 compute-0 python3[66559]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 05:22:03 compute-0 sudo[66557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:04 compute-0 sudo[66709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbkynrkyhzvvjgvosnotevshjbbscalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788923.8731472-382-84108130920977/AnsiballZ_stat.py'
Nov 22 05:22:04 compute-0 sudo[66709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:04 compute-0 python3.9[66711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:04 compute-0 sudo[66709]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:04 compute-0 sudo[66832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgssmfiwtgwfjalxfkwpwrvzivavvpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788923.8731472-382-84108130920977/AnsiballZ_copy.py'
Nov 22 05:22:04 compute-0 sudo[66832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:04 compute-0 python3.9[66834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788923.8731472-382-84108130920977/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:04 compute-0 sudo[66832]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:05 compute-0 sudo[66984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtegejbcgsfatqysvistpefxdjtwqtls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788925.1576605-397-211062127273011/AnsiballZ_stat.py'
Nov 22 05:22:05 compute-0 sudo[66984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:05 compute-0 python3.9[66986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:05 compute-0 sudo[66984]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:06 compute-0 sudo[67107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljglyrekuxudirkpocnvnxqasmmtvhon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788925.1576605-397-211062127273011/AnsiballZ_copy.py'
Nov 22 05:22:06 compute-0 sudo[67107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:06 compute-0 python3.9[67109]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788925.1576605-397-211062127273011/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:06 compute-0 sudo[67107]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:06 compute-0 sudo[67259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhgbnlqrbmfktlleywhrsypztqqpxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788926.4552882-412-3183530403795/AnsiballZ_stat.py'
Nov 22 05:22:06 compute-0 sudo[67259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:06 compute-0 python3.9[67261]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:07 compute-0 sudo[67259]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:07 compute-0 sudo[67382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cacytymlwlbdsbosjviraqbamuzrrdot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788926.4552882-412-3183530403795/AnsiballZ_copy.py'
Nov 22 05:22:07 compute-0 sudo[67382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:07 compute-0 python3.9[67384]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788926.4552882-412-3183530403795/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:07 compute-0 sudo[67382]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:08 compute-0 sudo[67534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meaacghnfwpzvxzmwyrmfntudmxolete ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788927.768072-427-214302495441314/AnsiballZ_stat.py'
Nov 22 05:22:08 compute-0 sudo[67534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:08 compute-0 python3.9[67536]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:08 compute-0 sudo[67534]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:08 compute-0 sudo[67657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxteuqahxtdlnzgzxnpbstvbjhtafyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788927.768072-427-214302495441314/AnsiballZ_copy.py'
Nov 22 05:22:08 compute-0 sudo[67657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:08 compute-0 python3.9[67659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788927.768072-427-214302495441314/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:08 compute-0 sudo[67657]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:09 compute-0 sudo[67809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgkrmjwtjceemicqjsbpserguwyxkeqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788928.9330702-442-176112762379502/AnsiballZ_stat.py'
Nov 22 05:22:09 compute-0 sudo[67809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:09 compute-0 python3.9[67811]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:22:09 compute-0 sudo[67809]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:09 compute-0 sudo[67932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcnnysefzutqjgsrypgmiiotumbgybyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788928.9330702-442-176112762379502/AnsiballZ_copy.py'
Nov 22 05:22:09 compute-0 sudo[67932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:10 compute-0 python3.9[67934]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763788928.9330702-442-176112762379502/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:10 compute-0 sudo[67932]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:10 compute-0 sudo[68084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqzggyrytfzcqqaedchpxmulepmxqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788930.3386068-457-121303696784253/AnsiballZ_file.py'
Nov 22 05:22:10 compute-0 sudo[68084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:10 compute-0 python3.9[68086]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:10 compute-0 sudo[68084]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:11 compute-0 sudo[68236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaezxdrcyhfusuueykybqeudwjceenjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788931.1114333-465-198337578471652/AnsiballZ_command.py'
Nov 22 05:22:11 compute-0 sudo[68236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:11 compute-0 python3.9[68238]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:11 compute-0 sudo[68236]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:12 compute-0 sudo[68395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbdikuqlfbbafrhdsxieqmmeteaikuay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788931.864405-473-257716842666430/AnsiballZ_blockinfile.py'
Nov 22 05:22:12 compute-0 sudo[68395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:12 compute-0 python3.9[68397]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:12 compute-0 sudo[68395]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:13 compute-0 sudo[68548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfldjtrjzeeuqkxdovbsktvblgchwdrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788932.785791-482-39130314291097/AnsiballZ_file.py'
Nov 22 05:22:13 compute-0 sudo[68548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:13 compute-0 python3.9[68550]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:13 compute-0 sudo[68548]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:13 compute-0 sudo[68700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gunhmbdgqvqjdhalfdptkflyfmcyngrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788933.4784098-482-40188693973629/AnsiballZ_file.py'
Nov 22 05:22:13 compute-0 sudo[68700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:13 compute-0 python3.9[68702]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:14 compute-0 sudo[68700]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:14 compute-0 sudo[68852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cutttpgcxiclruovkprmvbexojjmdwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788934.1557014-497-170999650871512/AnsiballZ_mount.py'
Nov 22 05:22:14 compute-0 sudo[68852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:14 compute-0 python3.9[68854]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 05:22:14 compute-0 sudo[68852]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:15 compute-0 sudo[69005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkuqnwuclvwxsrvijedlduifvadrglxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788935.01358-497-269107837534683/AnsiballZ_mount.py'
Nov 22 05:22:15 compute-0 sudo[69005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:15 compute-0 python3.9[69007]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 05:22:15 compute-0 sudo[69005]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:15 compute-0 sshd-session[59805]: Connection closed by 192.168.122.30 port 51996
Nov 22 05:22:15 compute-0 sshd-session[59802]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:22:15 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 22 05:22:15 compute-0 systemd[1]: session-13.scope: Consumed 37.962s CPU time.
Nov 22 05:22:15 compute-0 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Nov 22 05:22:15 compute-0 systemd-logind[798]: Removed session 13.
Nov 22 05:22:21 compute-0 sshd-session[69033]: Accepted publickey for zuul from 192.168.122.30 port 53666 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:22:21 compute-0 systemd-logind[798]: New session 14 of user zuul.
Nov 22 05:22:21 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 22 05:22:21 compute-0 sshd-session[69033]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:22:22 compute-0 sudo[69186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjrczzxzubqyrkltzzsitowfmsqesbzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788941.7604299-16-278062903926205/AnsiballZ_tempfile.py'
Nov 22 05:22:22 compute-0 sudo[69186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:22 compute-0 python3.9[69188]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 05:22:22 compute-0 sudo[69186]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:23 compute-0 sudo[69338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqbifetwozerzhwmgdmpjgnrbforrog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788942.5765276-28-168242377009061/AnsiballZ_stat.py'
Nov 22 05:22:23 compute-0 sudo[69338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:23 compute-0 python3.9[69340]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:22:23 compute-0 sudo[69338]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:24 compute-0 sudo[69490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xexrlsvkyygjrilgwjtrgpjhtmfrkvsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788943.442518-38-81432963209153/AnsiballZ_setup.py'
Nov 22 05:22:24 compute-0 sudo[69490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:24 compute-0 python3.9[69492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:22:24 compute-0 sudo[69490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:25 compute-0 sudo[69642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xibxpziupeohtwduxxtmdwnqvpfyvngn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788944.6284897-47-119770275271978/AnsiballZ_blockinfile.py'
Nov 22 05:22:25 compute-0 sudo[69642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:25 compute-0 python3.9[69644]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCit8LB4kN4s+ZkWj80X2HgMN9rqM53DLp82j+iZT/+7rzt4hXyml/QRwnRtRuhiMmFC20M8IvUEbNi1zKVVkcoHO/p5QkECCjKHEn1MqPis5D+QZQrGTeDLDkMrhuE8Pw5y61lJ5qm3EI6GZDRrUGmuVCEeJh9jpUQQ+8LlojrWycpo0svG9DIb8mUq1I1nCK8CeVIHkhCTc+F7OhSzzKJQHl5RrVX/K9kH0ak//kwjPdbyIHnB8JaTqci/DJPmcm4GxKKRNVErCrY3DBZNFCBt8iwjWu4MrqLv3iFLufwFed9mnoqLvVJGR8kDpmCdEKpNs8k6fls3xtt9j7NHMXOf4Xio2n+e3iS0eOEjoIKs/UMbDlHH7hqO/lx7Yv3YLgQtef4crGkOWxGILX2eOs5/1d6lgIzp04lzLy2oPlyJGb8bCwGvRMwojZNUO91mQkoO5vDssg6huJ8lBEWfxr8rao78xnahRc+m7sCEtI5n1VTqXAor62Z67+PFALoyi0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3PVl+DJPXhnIIicPnX2nTw410SH80rkcpaBLgvWfvA
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITMmQ16+iCw0/ZG0kuxaDVundusiLycQm50s7cZraLscE8RlmDWnFcRh+jIhL0lLGEyvuocxAlG/xRmMEF3zf8=
                                             create=True mode=0644 path=/tmp/ansible.21tunqgh state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:25 compute-0 sudo[69642]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:25 compute-0 sudo[69794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqejtabkxxncqtqlzqtocgfghlishpjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788945.4988503-55-69167999913505/AnsiballZ_command.py'
Nov 22 05:22:25 compute-0 sudo[69794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:26 compute-0 python3.9[69796]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.21tunqgh' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:26 compute-0 sudo[69794]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:26 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 05:22:26 compute-0 sudo[69950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewykxfueonlmtrvqrfxpqolscbwlsxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788946.3155248-63-179636382723923/AnsiballZ_file.py'
Nov 22 05:22:26 compute-0 sudo[69950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:26 compute-0 python3.9[69952]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.21tunqgh state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:26 compute-0 sudo[69950]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:27 compute-0 sshd-session[69036]: Connection closed by 192.168.122.30 port 53666
Nov 22 05:22:27 compute-0 sshd-session[69033]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:22:27 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 22 05:22:27 compute-0 systemd[1]: session-14.scope: Consumed 3.626s CPU time.
Nov 22 05:22:27 compute-0 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Nov 22 05:22:27 compute-0 systemd-logind[798]: Removed session 14.
Nov 22 05:22:29 compute-0 sshd-session[69977]: Invalid user solana from 80.94.92.166 port 38262
Nov 22 05:22:29 compute-0 sshd-session[69977]: Connection closed by invalid user solana 80.94.92.166 port 38262 [preauth]
Nov 22 05:22:32 compute-0 sshd-session[69979]: Accepted publickey for zuul from 192.168.122.30 port 38212 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:22:32 compute-0 systemd-logind[798]: New session 15 of user zuul.
Nov 22 05:22:32 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 22 05:22:32 compute-0 sshd-session[69979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:22:33 compute-0 python3.9[70132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:22:34 compute-0 sudo[70286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqtumbzibfmimkangreuprddbnskkigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788953.7965615-32-89098837651545/AnsiballZ_systemd.py'
Nov 22 05:22:34 compute-0 sudo[70286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:34 compute-0 python3.9[70288]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 05:22:34 compute-0 sudo[70286]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:35 compute-0 sudo[70440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvfqxentiwyqzuiuorhvxjdagyrlchco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788955.0650253-40-261547626214324/AnsiballZ_systemd.py'
Nov 22 05:22:35 compute-0 sudo[70440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:35 compute-0 python3.9[70442]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:22:35 compute-0 sudo[70440]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:36 compute-0 sudo[70593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrnfnwerxkfxwtgmkicpwqpoufqpagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788956.0573292-49-4074104958211/AnsiballZ_command.py'
Nov 22 05:22:36 compute-0 sudo[70593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:36 compute-0 python3.9[70595]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:36 compute-0 sudo[70593]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:37 compute-0 sudo[70746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umrbdqtbktncxfulnepiflrmvzqmfggz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788956.9339125-57-212450209743816/AnsiballZ_stat.py'
Nov 22 05:22:37 compute-0 sudo[70746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:37 compute-0 python3.9[70748]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:22:37 compute-0 sudo[70746]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:38 compute-0 sudo[70900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsdcfnejefluacvbofcnuyyvnjqnmftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788957.8078306-65-267328503661962/AnsiballZ_command.py'
Nov 22 05:22:38 compute-0 sudo[70900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:38 compute-0 python3.9[70902]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:38 compute-0 sudo[70900]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:38 compute-0 sudo[71055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uliwssjtxqbnqkmshmbwqbpwksjwwurl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788958.4906018-73-232321133962673/AnsiballZ_file.py'
Nov 22 05:22:38 compute-0 sudo[71055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:39 compute-0 python3.9[71057]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:22:39 compute-0 sudo[71055]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:39 compute-0 sshd-session[69982]: Connection closed by 192.168.122.30 port 38212
Nov 22 05:22:39 compute-0 sshd-session[69979]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:22:39 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 22 05:22:39 compute-0 systemd[1]: session-15.scope: Consumed 5.045s CPU time.
Nov 22 05:22:39 compute-0 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Nov 22 05:22:39 compute-0 systemd-logind[798]: Removed session 15.
Nov 22 05:22:45 compute-0 sshd-session[71082]: Accepted publickey for zuul from 192.168.122.30 port 43188 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:22:45 compute-0 systemd-logind[798]: New session 16 of user zuul.
Nov 22 05:22:45 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 22 05:22:45 compute-0 sshd-session[71082]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:22:46 compute-0 python3.9[71235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:22:47 compute-0 sudo[71389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uycegnyyzqmjjrcpjgymwvqrfpruqnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788967.0996454-34-247975898575199/AnsiballZ_setup.py'
Nov 22 05:22:47 compute-0 sudo[71389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:47 compute-0 python3.9[71391]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:22:48 compute-0 sudo[71389]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:48 compute-0 sudo[71473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnobxngrmahdfagixgiiwkopgfmbkick ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763788967.0996454-34-247975898575199/AnsiballZ_dnf.py'
Nov 22 05:22:48 compute-0 sudo[71473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:22:48 compute-0 python3.9[71475]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 05:22:49 compute-0 sudo[71473]: pam_unix(sudo:session): session closed for user root
Nov 22 05:22:50 compute-0 python3.9[71626]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:22:52 compute-0 python3.9[71777]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:22:53 compute-0 python3.9[71927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:22:53 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:22:54 compute-0 python3.9[72078]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:22:54 compute-0 sshd-session[71085]: Connection closed by 192.168.122.30 port 43188
Nov 22 05:22:54 compute-0 sshd-session[71082]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:22:54 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 22 05:22:54 compute-0 systemd[1]: session-16.scope: Consumed 6.421s CPU time.
Nov 22 05:22:54 compute-0 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Nov 22 05:22:54 compute-0 systemd-logind[798]: Removed session 16.
Nov 22 05:23:01 compute-0 sshd-session[72103]: Accepted publickey for zuul from 38.102.83.69 port 60252 ssh2: RSA SHA256:723CLJgh9jzg+4Vfbb+tCqWKZy25P9e6Oul69vFbpik
Nov 22 05:23:01 compute-0 systemd-logind[798]: New session 17 of user zuul.
Nov 22 05:23:01 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 22 05:23:01 compute-0 sshd-session[72103]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:23:01 compute-0 sudo[72179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsbxxifphvsurivlntqbarvmaqvpykg ; /usr/bin/python3'
Nov 22 05:23:01 compute-0 sudo[72179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:02 compute-0 useradd[72183]: new group: name=ceph-admin, GID=42478
Nov 22 05:23:02 compute-0 useradd[72183]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 22 05:23:02 compute-0 sudo[72179]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:02 compute-0 sudo[72265]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwzjmcrukahktqsuoqtvtjqzojkhritr ; /usr/bin/python3'
Nov 22 05:23:02 compute-0 sudo[72265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:02 compute-0 sudo[72265]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:03 compute-0 sudo[72338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjowinfteusuxtoihxcsdaotsienzbl ; /usr/bin/python3'
Nov 22 05:23:03 compute-0 sudo[72338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:03 compute-0 sudo[72338]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:03 compute-0 sudo[72388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvyyrlsbsqvncjnqwtjzfigqhpuhowk ; /usr/bin/python3'
Nov 22 05:23:03 compute-0 sudo[72388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:03 compute-0 sudo[72388]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:03 compute-0 sudo[72414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbhgcdxpygvknsajxioxqnrwozqphfg ; /usr/bin/python3'
Nov 22 05:23:03 compute-0 sudo[72414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:03 compute-0 sudo[72414]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:04 compute-0 sudo[72440]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypdcxcctftmgemzbkqdnjcrpqirxinwf ; /usr/bin/python3'
Nov 22 05:23:04 compute-0 sudo[72440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:04 compute-0 sudo[72440]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:04 compute-0 sudo[72466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlckzsqryrktsyvmsyzgvlgsezwcopyb ; /usr/bin/python3'
Nov 22 05:23:04 compute-0 sudo[72466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:04 compute-0 sudo[72466]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:05 compute-0 sudo[72544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twudicypbrslqcxfdaehkotsmsqxuzvr ; /usr/bin/python3'
Nov 22 05:23:05 compute-0 sudo[72544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:05 compute-0 sudo[72544]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:05 compute-0 sudo[72617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqlnjqrbamrndklzvznoxpguoxpazeis ; /usr/bin/python3'
Nov 22 05:23:05 compute-0 sudo[72617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:05 compute-0 sudo[72617]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:05 compute-0 sudo[72719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrrcdxvszjmygcxvpwngliijfwapuhg ; /usr/bin/python3'
Nov 22 05:23:05 compute-0 sudo[72719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:06 compute-0 sudo[72719]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:06 compute-0 sudo[72792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdzghfznwzdqtwkljrlpwoeykoqsbzhv ; /usr/bin/python3'
Nov 22 05:23:06 compute-0 sudo[72792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:06 compute-0 sudo[72792]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:06 compute-0 sudo[72842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtxyneiczxknxqawacfyjqyxqikjncep ; /usr/bin/python3'
Nov 22 05:23:06 compute-0 sudo[72842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:07 compute-0 python3[72844]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:23:08 compute-0 sudo[72842]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:08 compute-0 sudo[72937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyrifeucgohjmxkgpyxvreezttbztzrk ; /usr/bin/python3'
Nov 22 05:23:08 compute-0 sudo[72937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:08 compute-0 python3[72939]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 05:23:10 compute-0 sudo[72937]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:10 compute-0 sudo[72964]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfmuzmqcenjlysnquxscimuxbiveeve ; /usr/bin/python3'
Nov 22 05:23:10 compute-0 sudo[72964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:10 compute-0 python3[72966]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:10 compute-0 sudo[72964]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:10 compute-0 sudo[72990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnclgzhzdoaimuvesechlyesusndnfoc ; /usr/bin/python3'
Nov 22 05:23:10 compute-0 sudo[72990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:10 compute-0 python3[72992]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:10 compute-0 kernel: loop: module loaded
Nov 22 05:23:10 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 22 05:23:10 compute-0 sudo[72990]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:10 compute-0 sudo[73025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aazjbuixrnwsukvukvxuwfghqfuhwbug ; /usr/bin/python3'
Nov 22 05:23:10 compute-0 sudo[73025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:11 compute-0 python3[73027]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:11 compute-0 lvm[73030]: PV /dev/loop3 not used.
Nov 22 05:23:11 compute-0 lvm[73039]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 05:23:11 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 22 05:23:11 compute-0 sudo[73025]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:11 compute-0 lvm[73041]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 22 05:23:11 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 22 05:23:11 compute-0 sudo[73117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqizqvibtaoataqwxueielyxtwptoxc ; /usr/bin/python3'
Nov 22 05:23:11 compute-0 sudo[73117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:11 compute-0 python3[73119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:23:11 compute-0 sudo[73117]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:11 compute-0 sudo[73190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppltjmolvtbysaoveaqscyhukqnxxwkd ; /usr/bin/python3'
Nov 22 05:23:11 compute-0 sudo[73190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:12 compute-0 python3[73192]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763788991.4528005-36104-101923152292173/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:12 compute-0 sudo[73190]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:12 compute-0 sudo[73240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqybrjcvtggbbcxbsrznunuzekieemg ; /usr/bin/python3'
Nov 22 05:23:12 compute-0 sudo[73240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:12 compute-0 python3[73242]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:23:12 compute-0 systemd[1]: Reloading.
Nov 22 05:23:13 compute-0 systemd-rc-local-generator[73272]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:13 compute-0 systemd-sysv-generator[73275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:13 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 05:23:13 compute-0 bash[73282]: /dev/loop3: [64513]:4194941 (/var/lib/ceph-osd-0.img)
Nov 22 05:23:13 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 05:23:13 compute-0 sudo[73240]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:13 compute-0 lvm[73283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 05:23:13 compute-0 lvm[73283]: VG ceph_vg0 finished
Nov 22 05:23:13 compute-0 sudo[73307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-navcrkfiqcvnjvvwaxrawupcehbulybv ; /usr/bin/python3'
Nov 22 05:23:13 compute-0 sudo[73307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:13 compute-0 python3[73309]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 05:23:14 compute-0 sudo[73307]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:15 compute-0 sudo[73334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadzhrzzrfieztmtaevidztflvfkzdts ; /usr/bin/python3'
Nov 22 05:23:15 compute-0 sudo[73334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:15 compute-0 python3[73336]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:15 compute-0 sudo[73334]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:15 compute-0 sudo[73360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmysfwpiirwjdwlocfueybufpgxtsvzu ; /usr/bin/python3'
Nov 22 05:23:15 compute-0 sudo[73360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:15 compute-0 python3[73362]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:15 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 22 05:23:15 compute-0 sudo[73360]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:15 compute-0 sudo[73392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfdmlynjpegbbsukmpbnqmueqqmvrrrv ; /usr/bin/python3'
Nov 22 05:23:15 compute-0 sudo[73392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:15 compute-0 python3[73394]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:15 compute-0 lvm[73397]: PV /dev/loop4 not used.
Nov 22 05:23:15 compute-0 lvm[73399]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:23:16 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 22 05:23:16 compute-0 lvm[73410]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:23:16 compute-0 lvm[73410]: VG ceph_vg1 finished
Nov 22 05:23:16 compute-0 lvm[73408]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 22 05:23:16 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 22 05:23:16 compute-0 sudo[73392]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:16 compute-0 sudo[73486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmnitwjptzetwnwdawvyfqnoqektimd ; /usr/bin/python3'
Nov 22 05:23:16 compute-0 sudo[73486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:16 compute-0 python3[73488]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:23:16 compute-0 sudo[73486]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:16 compute-0 sudo[73559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqohgrutskyzfvkkeeezpkxkbjxvkpjm ; /usr/bin/python3'
Nov 22 05:23:16 compute-0 sudo[73559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:17 compute-0 python3[73561]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763788996.2851915-36131-152525462381567/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:17 compute-0 sudo[73559]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:17 compute-0 sudo[73609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jptwjucptjxawphajhmqrqucsqcyivcq ; /usr/bin/python3'
Nov 22 05:23:17 compute-0 sudo[73609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:17 compute-0 python3[73611]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:23:17 compute-0 systemd[1]: Reloading.
Nov 22 05:23:17 compute-0 systemd-rc-local-generator[73638]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:17 compute-0 systemd-sysv-generator[73643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:17 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 05:23:17 compute-0 bash[73652]: /dev/loop4: [64513]:4328008 (/var/lib/ceph-osd-1.img)
Nov 22 05:23:17 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 05:23:17 compute-0 lvm[73653]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:23:17 compute-0 lvm[73653]: VG ceph_vg1 finished
Nov 22 05:23:17 compute-0 sudo[73609]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:18 compute-0 sudo[73677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frhttvplifjfkifzvifpeimjdvjnfagm ; /usr/bin/python3'
Nov 22 05:23:18 compute-0 sudo[73677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:18 compute-0 python3[73679]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 05:23:19 compute-0 sudo[73677]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:19 compute-0 sudo[73704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlvpthhluwwkhbehfafibjiebrcgkbhg ; /usr/bin/python3'
Nov 22 05:23:19 compute-0 sudo[73704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:19 compute-0 python3[73706]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:19 compute-0 sudo[73704]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:20 compute-0 sudo[73730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtdncqfqtsteniiinbjvaxjcshvnyxd ; /usr/bin/python3'
Nov 22 05:23:20 compute-0 sudo[73730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:20 compute-0 chronyd[59321]: Selected source 23.133.168.247 (pool.ntp.org)
Nov 22 05:23:20 compute-0 python3[73732]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:20 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 22 05:23:20 compute-0 sudo[73730]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:20 compute-0 sudo[73762]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnsicskjyxaxkuefegtvcqewgpgydugt ; /usr/bin/python3'
Nov 22 05:23:20 compute-0 sudo[73762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:20 compute-0 python3[73764]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:20 compute-0 lvm[73767]: PV /dev/loop5 not used.
Nov 22 05:23:20 compute-0 lvm[73769]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:23:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 22 05:23:20 compute-0 lvm[73773]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 22 05:23:20 compute-0 lvm[73779]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:23:20 compute-0 lvm[73779]: VG ceph_vg2 finished
Nov 22 05:23:20 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 22 05:23:20 compute-0 sudo[73762]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:21 compute-0 sudo[73857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jikoagczfwiaazopphprdswqqetisqqr ; /usr/bin/python3'
Nov 22 05:23:21 compute-0 sudo[73857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:21 compute-0 python3[73859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:23:21 compute-0 sudo[73857]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:21 compute-0 sudo[73930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvktykfobmzxazponyyqksoiuwzjnpkn ; /usr/bin/python3'
Nov 22 05:23:21 compute-0 sudo[73930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:21 compute-0 python3[73932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789001.0991242-36158-95671880934911/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:21 compute-0 sudo[73930]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:22 compute-0 sudo[73980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guxbmbyhiyuqqdbkheprmghowmatlsat ; /usr/bin/python3'
Nov 22 05:23:22 compute-0 sudo[73980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:22 compute-0 python3[73982]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:23:22 compute-0 systemd[1]: Reloading.
Nov 22 05:23:22 compute-0 systemd-rc-local-generator[74007]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:22 compute-0 systemd-sysv-generator[74014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:22 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 05:23:22 compute-0 bash[74022]: /dev/loop5: [64513]:4328009 (/var/lib/ceph-osd-2.img)
Nov 22 05:23:22 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 05:23:22 compute-0 sudo[73980]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:22 compute-0 lvm[74023]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:23:22 compute-0 lvm[74023]: VG ceph_vg2 finished
Nov 22 05:23:24 compute-0 python3[74047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:23:26 compute-0 sudo[74138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmlqsdlriplmvqspugnomzesszlltop ; /usr/bin/python3'
Nov 22 05:23:26 compute-0 sudo[74138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:27 compute-0 python3[74140]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 05:23:28 compute-0 groupadd[74146]: group added to /etc/group: name=cephadm, GID=992
Nov 22 05:23:28 compute-0 groupadd[74146]: group added to /etc/gshadow: name=cephadm
Nov 22 05:23:28 compute-0 groupadd[74146]: new group: name=cephadm, GID=992
Nov 22 05:23:28 compute-0 useradd[74153]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 22 05:23:28 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:23:28 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:23:29 compute-0 sudo[74138]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:29 compute-0 sudo[74249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyyixqusffkzjombrzzsetmzxthedthw ; /usr/bin/python3'
Nov 22 05:23:29 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:23:29 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:23:29 compute-0 sudo[74249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:29 compute-0 systemd[1]: run-r7cff77c6cee64418a4e62cfbf5f00f5d.service: Deactivated successfully.
Nov 22 05:23:29 compute-0 python3[74252]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:29 compute-0 sudo[74249]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:29 compute-0 sudo[74278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skjurzwodkykhleoiagqwcmiivddlclq ; /usr/bin/python3'
Nov 22 05:23:29 compute-0 sudo[74278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:29 compute-0 python3[74280]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:30 compute-0 sudo[74278]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:30 compute-0 sudo[74341]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsfzmlwynvyiooecokbsnwylqeykzkha ; /usr/bin/python3'
Nov 22 05:23:30 compute-0 sudo[74341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:30 compute-0 python3[74343]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:30 compute-0 sudo[74341]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:30 compute-0 sudo[74367]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqaqbudpqicqjpfskmwvjncfckugyns ; /usr/bin/python3'
Nov 22 05:23:30 compute-0 sudo[74367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:31 compute-0 python3[74369]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:31 compute-0 sudo[74367]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:31 compute-0 sudo[74445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwjocjrshdigpcmudvrdadugndelphm ; /usr/bin/python3'
Nov 22 05:23:31 compute-0 sudo[74445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:31 compute-0 python3[74447]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:23:31 compute-0 sudo[74445]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:32 compute-0 sudo[74518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzdhdjmemwlozzxshntrtefdewtkjrxs ; /usr/bin/python3'
Nov 22 05:23:32 compute-0 sudo[74518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:32 compute-0 python3[74520]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789011.6581194-36305-40649663552791/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:32 compute-0 sudo[74518]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:33 compute-0 sudo[74620]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeybnfihdepsezofgtmuluybrfshwmwt ; /usr/bin/python3'
Nov 22 05:23:33 compute-0 sudo[74620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:33 compute-0 python3[74622]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:23:33 compute-0 sudo[74620]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:33 compute-0 sudo[74693]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qygtedvzmjsilgdwcthxjghzaqpjinzk ; /usr/bin/python3'
Nov 22 05:23:33 compute-0 sudo[74693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:33 compute-0 python3[74695]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789012.9001427-36323-152131388737324/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:23:33 compute-0 sudo[74693]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:33 compute-0 sudo[74743]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enypwgatncxfgctmpzrxknsdevsytqew ; /usr/bin/python3'
Nov 22 05:23:33 compute-0 sudo[74743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:34 compute-0 python3[74745]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:34 compute-0 sudo[74743]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:34 compute-0 sudo[74771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctakcfnzmyxlviszztqwirwrhniwcihv ; /usr/bin/python3'
Nov 22 05:23:34 compute-0 sudo[74771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:34 compute-0 python3[74773]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:34 compute-0 sudo[74771]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:34 compute-0 sudo[74799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twbfbrrziquifrmflslljwsyvsatpagw ; /usr/bin/python3'
Nov 22 05:23:34 compute-0 sudo[74799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:34 compute-0 python3[74801]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:23:34 compute-0 sudo[74799]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:34 compute-0 sudo[74827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opsjfymzqtfhvzxaoyqqdpovjoibafqq ; /usr/bin/python3'
Nov 22 05:23:34 compute-0 sudo[74827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:23:35 compute-0 python3[74829]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:35 compute-0 sshd-session[74845]: Accepted publickey for ceph-admin from 192.168.122.100 port 51618 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:23:35 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 05:23:35 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 05:23:35 compute-0 systemd-logind[798]: New session 18 of user ceph-admin.
Nov 22 05:23:35 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 05:23:35 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 22 05:23:35 compute-0 systemd[74849]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:23:35 compute-0 systemd[74849]: Queued start job for default target Main User Target.
Nov 22 05:23:35 compute-0 systemd[74849]: Created slice User Application Slice.
Nov 22 05:23:35 compute-0 systemd[74849]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 05:23:35 compute-0 systemd[74849]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 05:23:35 compute-0 systemd[74849]: Reached target Paths.
Nov 22 05:23:35 compute-0 systemd[74849]: Reached target Timers.
Nov 22 05:23:35 compute-0 systemd[74849]: Starting D-Bus User Message Bus Socket...
Nov 22 05:23:35 compute-0 systemd[74849]: Starting Create User's Volatile Files and Directories...
Nov 22 05:23:35 compute-0 systemd[74849]: Listening on D-Bus User Message Bus Socket.
Nov 22 05:23:35 compute-0 systemd[74849]: Reached target Sockets.
Nov 22 05:23:35 compute-0 systemd[74849]: Finished Create User's Volatile Files and Directories.
Nov 22 05:23:35 compute-0 systemd[74849]: Reached target Basic System.
Nov 22 05:23:35 compute-0 systemd[74849]: Reached target Main User Target.
Nov 22 05:23:35 compute-0 systemd[74849]: Startup finished in 141ms.
Nov 22 05:23:35 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 22 05:23:35 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 22 05:23:35 compute-0 sshd-session[74845]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:23:35 compute-0 sudo[74866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 22 05:23:35 compute-0 sudo[74866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:23:35 compute-0 sudo[74866]: pam_unix(sudo:session): session closed for user root
Nov 22 05:23:35 compute-0 sshd-session[74865]: Received disconnect from 192.168.122.100 port 51618:11: disconnected by user
Nov 22 05:23:35 compute-0 sshd-session[74865]: Disconnected from user ceph-admin 192.168.122.100 port 51618
Nov 22 05:23:35 compute-0 sshd-session[74845]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 22 05:23:35 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 22 05:23:35 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Nov 22 05:23:35 compute-0 systemd-logind[798]: Removed session 18.
Nov 22 05:23:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4184782893-merged.mount: Deactivated successfully.
Nov 22 05:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4184782893-lower\x2dmapped.mount: Deactivated successfully.
Nov 22 05:23:46 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 22 05:23:46 compute-0 systemd[74849]: Activating special unit Exit the Session...
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped target Main User Target.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped target Basic System.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped target Paths.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped target Sockets.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped target Timers.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 05:23:46 compute-0 systemd[74849]: Closed D-Bus User Message Bus Socket.
Nov 22 05:23:46 compute-0 systemd[74849]: Stopped Create User's Volatile Files and Directories.
Nov 22 05:23:46 compute-0 systemd[74849]: Removed slice User Application Slice.
Nov 22 05:23:46 compute-0 systemd[74849]: Reached target Shutdown.
Nov 22 05:23:46 compute-0 systemd[74849]: Finished Exit the Session.
Nov 22 05:23:46 compute-0 systemd[74849]: Reached target Exit the Session.
Nov 22 05:23:46 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 22 05:23:46 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 22 05:23:46 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 22 05:23:46 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 22 05:23:46 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 22 05:23:46 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 22 05:23:46 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 22 05:23:49 compute-0 podman[74903]: 2025-11-22 05:23:49.174211911 +0000 UTC m=+13.253894788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.269672366 +0000 UTC m=+0.061100633 container create 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:23:49 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 22 05:23:49 compute-0 systemd[1]: Started libpod-conmon-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope.
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.247865777 +0000 UTC m=+0.039294064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.380348734 +0000 UTC m=+0.171777011 container init 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.389942467 +0000 UTC m=+0.181370714 container start 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.395855455 +0000 UTC m=+0.187283732 container attach 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:49 compute-0 elastic_jennings[74982]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 05:23:49 compute-0 systemd[1]: libpod-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope: Deactivated successfully.
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.695575391 +0000 UTC m=+0.487003728 container died 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac60b2cce4f7281438b1cb37ef41d79f83436a951f4fc1757824bf4a0a9fac7d-merged.mount: Deactivated successfully.
Nov 22 05:23:49 compute-0 podman[74966]: 2025-11-22 05:23:49.761760957 +0000 UTC m=+0.553189214 container remove 3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7 (image=quay.io/ceph/ceph:v18, name=elastic_jennings, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:49 compute-0 systemd[1]: libpod-conmon-3accab2c231c46a3f807444a0d465e6c5eaf62ebaf7318f68a5b2d042a6d8ca7.scope: Deactivated successfully.
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.818853723 +0000 UTC m=+0.039105239 container create 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:49 compute-0 systemd[1]: Started libpod-conmon-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope.
Nov 22 05:23:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.882259325 +0000 UTC m=+0.102510871 container init 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.887104015 +0000 UTC m=+0.107355581 container start 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:49 compute-0 jolly_elion[75017]: 167 167
Nov 22 05:23:49 compute-0 systemd[1]: libpod-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope: Deactivated successfully.
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.891047478 +0000 UTC m=+0.111299034 container attach 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.892147708 +0000 UTC m=+0.112399224 container died 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.800586008 +0000 UTC m=+0.020837554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:49 compute-0 podman[75001]: 2025-11-22 05:23:49.934158323 +0000 UTC m=+0.154409839 container remove 35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030 (image=quay.io/ceph/ceph:v18, name=jolly_elion, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:23:49 compute-0 systemd[1]: libpod-conmon-35b95ecc960cbe8fb4b0b1e185d1eaa5cc4ef9b1fe7ff7720fc26cfc9757e030.scope: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.003821532 +0000 UTC m=+0.049733571 container create 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:50 compute-0 systemd[1]: Started libpod-conmon-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope.
Nov 22 05:23:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.060492766 +0000 UTC m=+0.106404825 container init 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.067079161 +0000 UTC m=+0.112991210 container start 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.07117342 +0000 UTC m=+0.117085499 container attach 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:49.979662301 +0000 UTC m=+0.025574420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:50 compute-0 brave_stonebraker[75049]: AQDmSCFpDHZwBRAA+iksjtMVNlBTWmFMf6R2mw==
Nov 22 05:23:50 compute-0 systemd[1]: libpod-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.095529746 +0000 UTC m=+0.141441785 container died 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:23:50 compute-0 podman[75032]: 2025-11-22 05:23:50.143827218 +0000 UTC m=+0.189739267 container remove 364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828 (image=quay.io/ceph/ceph:v18, name=brave_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:23:50 compute-0 systemd[1]: libpod-conmon-364feb00b9e89c5997fc38703fa797083babfb906fabbd5f1c3164f7b2aaa828.scope: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.231750702 +0000 UTC m=+0.057328522 container create 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:23:50 compute-0 systemd[1]: Started libpod-conmon-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope.
Nov 22 05:23:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.203242975 +0000 UTC m=+0.028820855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.303418944 +0000 UTC m=+0.128996834 container init 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.308880169 +0000 UTC m=+0.134457989 container start 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.319125861 +0000 UTC m=+0.144703751 container attach 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:50 compute-0 youthful_sammet[75084]: AQDmSCFpN9rQFBAA81ixeuAoaRYrVhuKeGII8A==
Nov 22 05:23:50 compute-0 systemd[1]: libpod-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.353608277 +0000 UTC m=+0.179186067 container died 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4975a723cf2792aed3c5269f25176d3440d67c553b666721134a2716388d04b2-merged.mount: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75068]: 2025-11-22 05:23:50.402029361 +0000 UTC m=+0.227607151 container remove 9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958 (image=quay.io/ceph/ceph:v18, name=youthful_sammet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:50 compute-0 systemd[1]: libpod-conmon-9d8908ace24e07bef7ddca8481646ddb9d8dd0ac08003110b2ec745af0087958.scope: Deactivated successfully.
Nov 22 05:23:50 compute-0 podman[75103]: 2025-11-22 05:23:50.475906443 +0000 UTC m=+0.047975415 container create f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:23:50 compute-0 systemd[1]: Started libpod-conmon-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope.
Nov 22 05:23:50 compute-0 podman[75103]: 2025-11-22 05:23:50.455859931 +0000 UTC m=+0.027928943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:51 compute-0 podman[75103]: 2025-11-22 05:23:51.369618704 +0000 UTC m=+0.941687696 container init f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:51 compute-0 podman[75103]: 2025-11-22 05:23:51.380896724 +0000 UTC m=+0.952965696 container start f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:23:51 compute-0 tender_mestorf[75120]: AQDnSCFpPvbiFxAAgpu0aaumWZdNcU2o+Pdsag==
Nov 22 05:23:51 compute-0 systemd[1]: libpod-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope: Deactivated successfully.
Nov 22 05:23:51 compute-0 podman[75103]: 2025-11-22 05:23:51.404560842 +0000 UTC m=+0.976629854 container attach f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:23:51 compute-0 podman[75103]: 2025-11-22 05:23:51.4052389 +0000 UTC m=+0.977307872 container died f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7efa087d475ec5b56ca15c9b4eafd20701a847556e5ac7be2eb5922a28ab5bc-merged.mount: Deactivated successfully.
Nov 22 05:23:51 compute-0 podman[75103]: 2025-11-22 05:23:51.499720598 +0000 UTC m=+1.071789600 container remove f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a (image=quay.io/ceph/ceph:v18, name=tender_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:51 compute-0 systemd[1]: libpod-conmon-f3a9b0614ec18db86772811b050f8e5c22d83d13dc2da91c6d5e2e310d25237a.scope: Deactivated successfully.
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.602817705 +0000 UTC m=+0.069925167 container create d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:23:51 compute-0 systemd[1]: Started libpod-conmon-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope.
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.574351449 +0000 UTC m=+0.041458951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/694d98e09c395c7b0e4a76ae0b1214c3eb1387f1b37929090f0c1bfa966aa9a4/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.701181955 +0000 UTC m=+0.168289487 container init d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.710934554 +0000 UTC m=+0.178042026 container start d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.715448774 +0000 UTC m=+0.182556236 container attach d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:23:51 compute-0 unruffled_morse[75157]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 22 05:23:51 compute-0 unruffled_morse[75157]: setting min_mon_release = pacific
Nov 22 05:23:51 compute-0 unruffled_morse[75157]: /usr/bin/monmaptool: set fsid to 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:51 compute-0 unruffled_morse[75157]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 22 05:23:51 compute-0 systemd[1]: libpod-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope: Deactivated successfully.
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.761184728 +0000 UTC m=+0.228292160 container died d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:51 compute-0 podman[75141]: 2025-11-22 05:23:51.930881072 +0000 UTC m=+0.397988534 container remove d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60 (image=quay.io/ceph/ceph:v18, name=unruffled_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:51 compute-0 systemd[1]: libpod-conmon-d16b5a5e1e5fc768b91537defb2138b927cfc588f3fc3929d5128c890d0a2c60.scope: Deactivated successfully.
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.001218629 +0000 UTC m=+0.043420413 container create ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:23:52 compute-0 systemd[1]: Started libpod-conmon-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope.
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:51.981703042 +0000 UTC m=+0.023904846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.1225569 +0000 UTC m=+0.164758774 container init ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.131856717 +0000 UTC m=+0.174058541 container start ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.135789242 +0000 UTC m=+0.177991106 container attach ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:23:52 compute-0 systemd[1]: libpod-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope: Deactivated successfully.
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.225004449 +0000 UTC m=+0.267206263 container died ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-eda09f677016b69b052d2ef430161621b6a89e5d5ed6ac06e14ab71445e9c253-merged.mount: Deactivated successfully.
Nov 22 05:23:52 compute-0 podman[75176]: 2025-11-22 05:23:52.268113124 +0000 UTC m=+0.310314928 container remove ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505 (image=quay.io/ceph/ceph:v18, name=romantic_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:52 compute-0 systemd[1]: libpod-conmon-ed9ff56fd7838c9ab7cbd04405592485d93c950823e3da8eb237677f17633505.scope: Deactivated successfully.
Nov 22 05:23:52 compute-0 systemd[1]: Reloading.
Nov 22 05:23:52 compute-0 systemd-sysv-generator[75262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:52 compute-0 systemd-rc-local-generator[75259]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:52 compute-0 systemd[1]: Reloading.
Nov 22 05:23:52 compute-0 systemd-sysv-generator[75302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:52 compute-0 systemd-rc-local-generator[75298]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:52 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 22 05:23:52 compute-0 systemd[1]: Reloading.
Nov 22 05:23:52 compute-0 systemd-sysv-generator[75340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:52 compute-0 systemd-rc-local-generator[75335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:53 compute-0 systemd[1]: Reached target Ceph cluster 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:53 compute-0 systemd[1]: Reloading.
Nov 22 05:23:53 compute-0 systemd-rc-local-generator[75368]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:53 compute-0 systemd-sysv-generator[75372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:53 compute-0 systemd[1]: Reloading.
Nov 22 05:23:53 compute-0 systemd-rc-local-generator[75411]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:53 compute-0 systemd-sysv-generator[75418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:53 compute-0 systemd[1]: Created slice Slice /system/ceph-13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:53 compute-0 systemd[1]: Reached target System Time Set.
Nov 22 05:23:53 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 22 05:23:53 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:53 compute-0 podman[75472]: 2025-11-22 05:23:53.884148418 +0000 UTC m=+0.056845549 container create 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:53 compute-0 podman[75472]: 2025-11-22 05:23:53.85555938 +0000 UTC m=+0.028256571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:53 compute-0 podman[75472]: 2025-11-22 05:23:53.970209133 +0000 UTC m=+0.142906254 container init 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:53 compute-0 podman[75472]: 2025-11-22 05:23:53.977291531 +0000 UTC m=+0.149988642 container start 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:23:53 compute-0 bash[75472]: 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5
Nov 22 05:23:53 compute-0 systemd[1]: Started Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:54 compute-0 ceph-mon[75491]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: pidfile_write: ignore empty --pid-file
Nov 22 05:23:54 compute-0 ceph-mon[75491]: load: jerasure load: lrc 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Git sha 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: DB SUMMARY
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: DB Session ID:  3Q880ZQ6T64W7W0R1Q28
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                                     Options.env: 0x559676215c40
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                                Options.info_log: 0x55967801ee80
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                                 Options.wal_dir: 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                    Options.write_buffer_manager: 0x55967802eb40
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                               Options.row_cache: None
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                              Options.wal_filter: None
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.wal_compression: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.max_background_jobs: 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Compression algorithms supported:
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kZSTD supported: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:           Options.merge_operator: 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:        Options.compaction_filter: None
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55967801ea80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5596780171f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.compression: NoCompression
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.num_levels: 7
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4e45ab2-4273-47c3-96b1-648e5316c944
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034027276, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034029681, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "3Q880ZQ6T64W7W0R1Q28", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789034029791, "job": 1, "event": "recovery_finished"}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559678040e00
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: DB pointer 0x5596780ca000
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:23:54 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.15 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596780171f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 05:23:54 compute-0 ceph-mon[75491]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@-1(???) e0 preinit fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 22 05:23:54 compute-0 ceph-mon[75491]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 05:23:54 compute-0 ceph-mon[75491]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-22T05:23:52.169278Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.063129439 +0000 UTC m=+0.048734025 container create 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).mds e1 new map
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mkfs 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:54 compute-0 systemd[1]: Started libpod-conmon-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope.
Nov 22 05:23:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.042103051 +0000 UTC m=+0.027707617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52df07aef776543e564715bab648e810da8b4dcf07aeb6b711a81f88fdaf69e1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.1633895 +0000 UTC m=+0.148994076 container init 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.170888769 +0000 UTC m=+0.156493325 container start 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.174433414 +0000 UTC m=+0.160038010 container attach 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:23:54 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 05:23:54 compute-0 ceph-mon[75491]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866553743' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:   cluster:
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     id:     13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     health: HEALTH_OK
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:  
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:   services:
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     mon: 1 daemons, quorum compute-0 (age 0.490728s)
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     mgr: no daemons active
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     osd: 0 osds: 0 up, 0 in
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:  
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:   data:
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     pools:   0 pools, 0 pgs
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     objects: 0 objects, 0 B
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     usage:   0 B used, 0 B / 0 B avail
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:     pgs:     
Nov 22 05:23:54 compute-0 funny_goldberg[75545]:  
Nov 22 05:23:54 compute-0 systemd[1]: libpod-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope: Deactivated successfully.
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.568995746 +0000 UTC m=+0.554600342 container died 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:54 compute-0 podman[75492]: 2025-11-22 05:23:54.631277179 +0000 UTC m=+0.616881765 container remove 815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f (image=quay.io/ceph/ceph:v18, name=funny_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:23:54 compute-0 systemd[1]: libpod-conmon-815dfb24f0ac374d3225113e2e0eb8286e1c060bc8df3d7e583407a20a8c024f.scope: Deactivated successfully.
Nov 22 05:23:54 compute-0 podman[75583]: 2025-11-22 05:23:54.723236321 +0000 UTC m=+0.058288009 container create 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:23:54 compute-0 systemd[1]: Started libpod-conmon-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope.
Nov 22 05:23:54 compute-0 podman[75583]: 2025-11-22 05:23:54.696055259 +0000 UTC m=+0.031106967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:54 compute-0 podman[75583]: 2025-11-22 05:23:54.823556663 +0000 UTC m=+0.158608321 container init 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:23:54 compute-0 podman[75583]: 2025-11-22 05:23:54.837890793 +0000 UTC m=+0.172942491 container start 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:54 compute-0 podman[75583]: 2025-11-22 05:23:54.842550297 +0000 UTC m=+0.177601975 container attach 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:23:55 compute-0 ceph-mon[75491]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:55 compute-0 ceph-mon[75491]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 05:23:55 compute-0 ceph-mon[75491]: fsmap 
Nov 22 05:23:55 compute-0 ceph-mon[75491]: osdmap e1: 0 total, 0 up, 0 in
Nov 22 05:23:55 compute-0 ceph-mon[75491]: mgrmap e1: no daemons active
Nov 22 05:23:55 compute-0 ceph-mon[75491]: from='client.? 192.168.122.100:0/1866553743' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 05:23:55 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 05:23:55 compute-0 ceph-mon[75491]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:23:55 compute-0 ceph-mon[75491]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 05:23:55 compute-0 charming_matsumoto[75599]: 
Nov 22 05:23:55 compute-0 charming_matsumoto[75599]: [global]
Nov 22 05:23:55 compute-0 charming_matsumoto[75599]:         fsid = 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:55 compute-0 charming_matsumoto[75599]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 22 05:23:55 compute-0 charming_matsumoto[75599]:         osd_crush_chooseleaf_type = 0
Nov 22 05:23:55 compute-0 systemd[1]: libpod-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope: Deactivated successfully.
Nov 22 05:23:55 compute-0 podman[75583]: 2025-11-22 05:23:55.253036043 +0000 UTC m=+0.588087741 container died 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4157ad28ff5e9d7fb8282b187bca48c9c9b539fbbb7921d8a8f18c38ac9de93e-merged.mount: Deactivated successfully.
Nov 22 05:23:55 compute-0 podman[75583]: 2025-11-22 05:23:55.32754535 +0000 UTC m=+0.662597048 container remove 5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf (image=quay.io/ceph/ceph:v18, name=charming_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:55 compute-0 systemd[1]: libpod-conmon-5c107dcab2d33138256f93f26d3f9ebd3c3dd34354c8abcf978a678c7a6f33bf.scope: Deactivated successfully.
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.427027181 +0000 UTC m=+0.069476205 container create 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:55 compute-0 systemd[1]: Started libpod-conmon-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope.
Nov 22 05:23:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.396663265 +0000 UTC m=+0.039112369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.510065065 +0000 UTC m=+0.152514189 container init 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.519577267 +0000 UTC m=+0.162026321 container start 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.524000255 +0000 UTC m=+0.166449309 container attach 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:55 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:23:55 compute-0 ceph-mon[75491]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129884442' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:23:55 compute-0 systemd[1]: libpod-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope: Deactivated successfully.
Nov 22 05:23:55 compute-0 podman[75639]: 2025-11-22 05:23:55.960936333 +0000 UTC m=+0.603385387 container died 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d50b2b1895fc5eb969a465b73a87bef10c73d24cb68b817d25ad1b0eb94de752-merged.mount: Deactivated successfully.
Nov 22 05:23:56 compute-0 podman[75639]: 2025-11-22 05:23:56.006610285 +0000 UTC m=+0.649059299 container remove 075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c (image=quay.io/ceph/ceph:v18, name=elastic_curie, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:56 compute-0 systemd[1]: libpod-conmon-075dbbc63c2d69287acd4c63f6b34b0b48f98f5be36a250999cd3878df78a29c.scope: Deactivated successfully.
Nov 22 05:23:56 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:23:56 compute-0 ceph-mon[75491]: from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:23:56 compute-0 ceph-mon[75491]: from='client.? 192.168.122.100:0/159893929' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 05:23:56 compute-0 ceph-mon[75491]: from='client.? 192.168.122.100:0/4129884442' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:23:56 compute-0 ceph-mon[75491]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 05:23:56 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 05:23:56 compute-0 ceph-mon[75491]: mon.compute-0@0(leader) e1 shutdown
Nov 22 05:23:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75487]: 2025-11-22T05:23:56.210+0000 7f75ef9a3640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 05:23:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75487]: 2025-11-22T05:23:56.210+0000 7f75ef9a3640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 05:23:56 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 05:23:56 compute-0 ceph-mon[75491]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 05:23:56 compute-0 podman[75720]: 2025-11-22 05:23:56.302899529 +0000 UTC m=+0.124096605 container died 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d21d2ca03f686a7a1210b11019271493c8df076e891ca2eceb04dcca608a9c-merged.mount: Deactivated successfully.
Nov 22 05:23:56 compute-0 podman[75720]: 2025-11-22 05:23:56.345308485 +0000 UTC m=+0.166505551 container remove 4eb2cd9740bbb7e78b37d019742eac3293a4cbb3156c12ebb078cd1b08cce8b5 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:23:56 compute-0 bash[75720]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0
Nov 22 05:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 05:23:56 compute-0 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0.service: Deactivated successfully.
Nov 22 05:23:56 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:56 compute-0 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0.service: Consumed 1.094s CPU time.
Nov 22 05:23:56 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:23:56 compute-0 podman[75821]: 2025-11-22 05:23:56.831314705 +0000 UTC m=+0.069314211 container create d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d77d680da33cc4cb888a6c3583cc78239731eabadba0abb697ffda11c24e159a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:56 compute-0 podman[75821]: 2025-11-22 05:23:56.898233001 +0000 UTC m=+0.136232517 container init d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:56 compute-0 podman[75821]: 2025-11-22 05:23:56.805803868 +0000 UTC m=+0.043803414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:56 compute-0 podman[75821]: 2025-11-22 05:23:56.903582433 +0000 UTC m=+0.141581919 container start d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:23:56 compute-0 bash[75821]: d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107
Nov 22 05:23:56 compute-0 systemd[1]: Started Ceph mon.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:56 compute-0 ceph-mon[75840]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:23:56 compute-0 ceph-mon[75840]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: pidfile_write: ignore empty --pid-file
Nov 22 05:23:56 compute-0 ceph-mon[75840]: load: jerasure load: lrc 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Git sha 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: DB SUMMARY
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: DB Session ID:  OCOOLGAJEIQ903CUBBA6
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55672 ; 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                                     Options.env: 0x55fdf8ffbc40
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                                Options.info_log: 0x55fdfafd1040
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                                 Options.wal_dir: 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                    Options.write_buffer_manager: 0x55fdfafe0b40
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                               Options.row_cache: None
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                              Options.wal_filter: None
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.wal_compression: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.max_background_jobs: 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Compression algorithms supported:
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kZSTD supported: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:           Options.merge_operator: 
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:        Options.compaction_filter: None
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fdfafd0c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fdfafc91f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.compression: NoCompression
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.num_levels: 7
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c4e45ab2-4273-47c3-96b1-648e5316c944
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036974266, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036984447, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53793, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51382, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789036, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789036984636, "job": 1, "event": "recovery_finished"}
Nov 22 05:23:56 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 22 05:23:57 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:23:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fdfaff2e00
Nov 22 05:23:57 compute-0 ceph-mon[75840]: rocksdb: DB pointer 0x55fdfb07c000
Nov 22 05:23:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:23:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 1.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 1.68 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 05:23:57 compute-0 ceph-mon[75840]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???) e1 preinit fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).mds e1 new map
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 05:23:57 compute-0 ceph-mon[75840]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.037437176 +0000 UTC m=+0.072843875 container create 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 05:23:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 05:23:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 05:23:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 05:23:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 05:23:57 compute-0 systemd[1]: Started libpod-conmon-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope.
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:56.995786951 +0000 UTC m=+0.031193690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 05:23:57 compute-0 ceph-mon[75840]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 05:23:57 compute-0 ceph-mon[75840]: fsmap 
Nov 22 05:23:57 compute-0 ceph-mon[75840]: osdmap e1: 0 total, 0 up, 0 in
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mgrmap e1: no daemons active
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.152815498 +0000 UTC m=+0.188222287 container init 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.160813451 +0000 UTC m=+0.196220190 container start 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.164881758 +0000 UTC m=+0.200288487 container attach 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:23:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 22 05:23:57 compute-0 systemd[1]: libpod-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope: Deactivated successfully.
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.601931679 +0000 UTC m=+0.637338368 container died 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f35211947f924f28247db1d779c2b1aaadea143b70bdbcce111322ae491c78a2-merged.mount: Deactivated successfully.
Nov 22 05:23:57 compute-0 podman[75841]: 2025-11-22 05:23:57.655388188 +0000 UTC m=+0.690794887 container remove 6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95 (image=quay.io/ceph/ceph:v18, name=epic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:23:57 compute-0 systemd[1]: libpod-conmon-6bf9825e2d95cf195865e9fd9a18019255e097dd1f1e7b9d6b08cb93ae25df95.scope: Deactivated successfully.
Nov 22 05:23:57 compute-0 podman[75935]: 2025-11-22 05:23:57.712904275 +0000 UTC m=+0.036190622 container create ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:23:57 compute-0 systemd[1]: Started libpod-conmon-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope.
Nov 22 05:23:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:57 compute-0 podman[75935]: 2025-11-22 05:23:57.786999831 +0000 UTC m=+0.110286188 container init ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:23:57 compute-0 podman[75935]: 2025-11-22 05:23:57.698326837 +0000 UTC m=+0.021613204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:57 compute-0 podman[75935]: 2025-11-22 05:23:57.79785924 +0000 UTC m=+0.121145597 container start ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:23:57 compute-0 podman[75935]: 2025-11-22 05:23:57.802201515 +0000 UTC m=+0.125487872 container attach ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:23:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 22 05:23:58 compute-0 systemd[1]: libpod-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope: Deactivated successfully.
Nov 22 05:23:58 compute-0 podman[75935]: 2025-11-22 05:23:58.236517743 +0000 UTC m=+0.559804130 container died ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-aed792b4eba8b6cdd8a967da956d66156e6c54a2c0e57a991deda24f88812480-merged.mount: Deactivated successfully.
Nov 22 05:23:58 compute-0 podman[75935]: 2025-11-22 05:23:58.293280659 +0000 UTC m=+0.616566996 container remove ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6 (image=quay.io/ceph/ceph:v18, name=relaxed_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:23:58 compute-0 systemd[1]: libpod-conmon-ac0c166e1d1dfb7249d5b24e6ebbfafb432eaa0303aa7cd3fc8c12d381e095d6.scope: Deactivated successfully.
Nov 22 05:23:58 compute-0 systemd[1]: Reloading.
Nov 22 05:23:58 compute-0 systemd-rc-local-generator[76020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:58 compute-0 systemd-sysv-generator[76023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:58 compute-0 systemd[1]: Reloading.
Nov 22 05:23:58 compute-0 systemd-sysv-generator[76060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:23:58 compute-0 systemd-rc-local-generator[76056]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:23:58 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mscchl for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:23:59 compute-0 podman[76114]: 2025-11-22 05:23:59.158960097 +0000 UTC m=+0.056680926 container create 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:23:59 compute-0 podman[76114]: 2025-11-22 05:23:59.12894877 +0000 UTC m=+0.026669689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45c926c1a7cbf4a8b15717a14910d22a3a47446d15d56817e9d48eda43b40114/merged/var/lib/ceph/mgr/ceph-compute-0.mscchl supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 podman[76114]: 2025-11-22 05:23:59.248770421 +0000 UTC m=+0.146491290 container init 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:23:59 compute-0 podman[76114]: 2025-11-22 05:23:59.258971381 +0000 UTC m=+0.156692250 container start 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 05:23:59 compute-0 bash[76114]: 73442774e72467ba7f22ad6ebe97af6c626dd686b7ea1fdca95f79a61ca9f40f
Nov 22 05:23:59 compute-0 systemd[1]: Started Ceph mgr.compute-0.mscchl for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.366823564 +0000 UTC m=+0.054633741 container create 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:23:59 compute-0 systemd[1]: Started libpod-conmon-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope.
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.340926387 +0000 UTC m=+0.028736544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Nov 22 05:23:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.487156938 +0000 UTC m=+0.174967145 container init 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.500976915 +0000 UTC m=+0.188787082 container start 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.505545196 +0000 UTC m=+0.193355373 container attach 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Nov 22 05:23:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:23:59.737+0000 7f82420f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:23:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:23:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983410945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:23:59 compute-0 funny_haibt[76175]: 
Nov 22 05:23:59 compute-0 funny_haibt[76175]: {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "health": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "status": "HEALTH_OK",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "checks": {},
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "mutes": []
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "election_epoch": 5,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "quorum": [
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         0
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     ],
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "quorum_names": [
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "compute-0"
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     ],
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "quorum_age": 2,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "monmap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "epoch": 1,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "min_mon_release_name": "reef",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_mons": 1
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "osdmap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "epoch": 1,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_osds": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_up_osds": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "osd_up_since": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_in_osds": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "osd_in_since": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_remapped_pgs": 0
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "pgmap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "pgs_by_state": [],
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_pgs": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_pools": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_objects": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "data_bytes": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "bytes_used": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "bytes_avail": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "bytes_total": 0
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "fsmap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "epoch": 1,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "by_rank": [],
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "up:standby": 0
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "mgrmap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "available": false,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "num_standbys": 0,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "modules": [
Nov 22 05:23:59 compute-0 funny_haibt[76175]:             "iostat",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:             "nfs",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:             "restful"
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         ],
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "services": {}
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "servicemap": {
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "epoch": 1,
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:23:59 compute-0 funny_haibt[76175]:         "services": {}
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     },
Nov 22 05:23:59 compute-0 funny_haibt[76175]:     "progress_events": {}
Nov 22 05:23:59 compute-0 funny_haibt[76175]: }
Nov 22 05:23:59 compute-0 systemd[1]: libpod-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope: Deactivated successfully.
Nov 22 05:23:59 compute-0 podman[76135]: 2025-11-22 05:23:59.961459757 +0000 UTC m=+0.649269924 container died 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:23:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:23:59.976+0000 7f82420f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:23:59 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Nov 22 05:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a43712c91ca9909a17641b6e306ca1c956120ed622d8ac4a60963ade2212c1-merged.mount: Deactivated successfully.
Nov 22 05:23:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2983410945' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:00 compute-0 podman[76135]: 2025-11-22 05:24:00.007781957 +0000 UTC m=+0.695592084 container remove 0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204 (image=quay.io/ceph/ceph:v18, name=funny_haibt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:24:00 compute-0 systemd[1]: libpod-conmon-0fa08b56d5f49a82f8027b3ec408e7b41b0e955056c2d0cbedb240dacf333204.scope: Deactivated successfully.
Nov 22 05:24:01 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.11245335 +0000 UTC m=+0.060020264 container create 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:02 compute-0 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:24:02 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Nov 22 05:24:02 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:02.130+0000 7f82420f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:24:02 compute-0 systemd[1]: Started libpod-conmon-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope.
Nov 22 05:24:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.085016551 +0000 UTC m=+0.032583566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.187977055 +0000 UTC m=+0.135544009 container init 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.194339954 +0000 UTC m=+0.141906908 container start 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.198259097 +0000 UTC m=+0.145826061 container attach 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:24:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621867992' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:02 compute-0 nice_haibt[76240]: 
Nov 22 05:24:02 compute-0 nice_haibt[76240]: {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "health": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "status": "HEALTH_OK",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "checks": {},
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "mutes": []
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "election_epoch": 5,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "quorum": [
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         0
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     ],
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "quorum_names": [
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "compute-0"
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     ],
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "quorum_age": 5,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "monmap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "epoch": 1,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "min_mon_release_name": "reef",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_mons": 1
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "osdmap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "epoch": 1,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_osds": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_up_osds": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "osd_up_since": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_in_osds": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "osd_in_since": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_remapped_pgs": 0
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "pgmap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "pgs_by_state": [],
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_pgs": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_pools": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_objects": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "data_bytes": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "bytes_used": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "bytes_avail": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "bytes_total": 0
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "fsmap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "epoch": 1,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "by_rank": [],
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "up:standby": 0
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "mgrmap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "available": false,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "num_standbys": 0,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "modules": [
Nov 22 05:24:02 compute-0 nice_haibt[76240]:             "iostat",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:             "nfs",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:             "restful"
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         ],
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "services": {}
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "servicemap": {
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "epoch": 1,
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:02 compute-0 nice_haibt[76240]:         "services": {}
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     },
Nov 22 05:24:02 compute-0 nice_haibt[76240]:     "progress_events": {}
Nov 22 05:24:02 compute-0 nice_haibt[76240]: }
Nov 22 05:24:02 compute-0 systemd[1]: libpod-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope: Deactivated successfully.
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.588698671 +0000 UTC m=+0.536265585 container died 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a236154c7e49c41f77659cb1fc8d35eb1778e1ad5e737bf767d18ffd5d38db2-merged.mount: Deactivated successfully.
Nov 22 05:24:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1621867992' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:02 compute-0 podman[76224]: 2025-11-22 05:24:02.63238543 +0000 UTC m=+0.579952354 container remove 3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71 (image=quay.io/ceph/ceph:v18, name=nice_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:24:02 compute-0 systemd[1]: libpod-conmon-3f856363111e57007407b3133e01ebdd138f1db2ed84391eda8c837bbe033a71.scope: Deactivated successfully.
Nov 22 05:24:03 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Nov 22 05:24:03 compute-0 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:24:03 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 05:24:03 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:03.734+0000 7f82420f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]:   from numpy import show_config as show_numpy_config
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.269+0000 7f82420f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.511+0000 7f82420f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 podman[76277]: 2025-11-22 05:24:04.733504691 +0000 UTC m=+0.070195534 container create fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Nov 22 05:24:04 compute-0 systemd[1]: Started libpod-conmon-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope.
Nov 22 05:24:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:04 compute-0 podman[76277]: 2025-11-22 05:24:04.707240124 +0000 UTC m=+0.043931017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:04 compute-0 podman[76277]: 2025-11-22 05:24:04.812855457 +0000 UTC m=+0.149546310 container init fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:04 compute-0 podman[76277]: 2025-11-22 05:24:04.827843205 +0000 UTC m=+0.164534048 container start fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:04 compute-0 podman[76277]: 2025-11-22 05:24:04.832723005 +0000 UTC m=+0.169413868 container attach fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 05:24:04 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Nov 22 05:24:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:04.985+0000 7f82420f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 05:24:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422140615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:05 compute-0 infallible_poincare[76294]: 
Nov 22 05:24:05 compute-0 infallible_poincare[76294]: {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "health": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "status": "HEALTH_OK",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "checks": {},
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "mutes": []
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "election_epoch": 5,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "quorum": [
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         0
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     ],
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "quorum_names": [
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "compute-0"
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     ],
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "quorum_age": 8,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "monmap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "epoch": 1,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "min_mon_release_name": "reef",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_mons": 1
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "osdmap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "epoch": 1,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_osds": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_up_osds": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "osd_up_since": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_in_osds": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "osd_in_since": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_remapped_pgs": 0
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "pgmap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "pgs_by_state": [],
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_pgs": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_pools": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_objects": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "data_bytes": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "bytes_used": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "bytes_avail": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "bytes_total": 0
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "fsmap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "epoch": 1,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "by_rank": [],
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "up:standby": 0
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "mgrmap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "available": false,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "num_standbys": 0,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "modules": [
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:             "iostat",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:             "nfs",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:             "restful"
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         ],
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "services": {}
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "servicemap": {
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "epoch": 1,
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:         "services": {}
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     },
Nov 22 05:24:05 compute-0 infallible_poincare[76294]:     "progress_events": {}
Nov 22 05:24:05 compute-0 infallible_poincare[76294]: }
Nov 22 05:24:05 compute-0 systemd[1]: libpod-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope: Deactivated successfully.
Nov 22 05:24:05 compute-0 podman[76277]: 2025-11-22 05:24:05.254424098 +0000 UTC m=+0.591114911 container died fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-65650ed6da0bdcb7d5611900d91e07e195722e15563228dfa96c6d27c89157f0-merged.mount: Deactivated successfully.
Nov 22 05:24:05 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1422140615' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:05 compute-0 podman[76277]: 2025-11-22 05:24:05.314177494 +0000 UTC m=+0.650868317 container remove fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e (image=quay.io/ceph/ceph:v18, name=infallible_poincare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:05 compute-0 systemd[1]: libpod-conmon-fad98bfb0fa476b1019ac19d4249b11d480a6c91efc8ba1294a36d4e0af9ce0e.scope: Deactivated successfully.
Nov 22 05:24:06 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Nov 22 05:24:06 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.403575353 +0000 UTC m=+0.063275211 container create 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:24:07 compute-0 systemd[1]: Started libpod-conmon-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope.
Nov 22 05:24:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.386724865 +0000 UTC m=+0.046424753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.503716691 +0000 UTC m=+0.163416649 container init 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.511635131 +0000 UTC m=+0.171335029 container start 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.51577145 +0000 UTC m=+0.175471348 container attach 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:24:07 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Nov 22 05:24:07 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Nov 22 05:24:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480979014' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:07 compute-0 busy_hoover[76347]: 
Nov 22 05:24:07 compute-0 busy_hoover[76347]: {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "health": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "status": "HEALTH_OK",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "checks": {},
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "mutes": []
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "election_epoch": 5,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "quorum": [
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         0
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     ],
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "quorum_names": [
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "compute-0"
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     ],
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "quorum_age": 10,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "monmap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "epoch": 1,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "min_mon_release_name": "reef",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_mons": 1
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "osdmap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "epoch": 1,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_osds": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_up_osds": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "osd_up_since": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_in_osds": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "osd_in_since": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_remapped_pgs": 0
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "pgmap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "pgs_by_state": [],
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_pgs": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_pools": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_objects": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "data_bytes": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "bytes_used": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "bytes_avail": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "bytes_total": 0
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "fsmap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "epoch": 1,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "by_rank": [],
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "up:standby": 0
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "mgrmap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "available": false,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "num_standbys": 0,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "modules": [
Nov 22 05:24:07 compute-0 busy_hoover[76347]:             "iostat",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:             "nfs",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:             "restful"
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         ],
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "services": {}
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "servicemap": {
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "epoch": 1,
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:07 compute-0 busy_hoover[76347]:         "services": {}
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     },
Nov 22 05:24:07 compute-0 busy_hoover[76347]:     "progress_events": {}
Nov 22 05:24:07 compute-0 busy_hoover[76347]: }
Nov 22 05:24:07 compute-0 systemd[1]: libpod-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope: Deactivated successfully.
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.912406468 +0000 UTC m=+0.572106336 container died 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-80573470f0925d753d231bfd804c2c81bbb983a97fa0eeb14e3f0515a6e89d10-merged.mount: Deactivated successfully.
Nov 22 05:24:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3480979014' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:07 compute-0 podman[76331]: 2025-11-22 05:24:07.984055401 +0000 UTC m=+0.643755259 container remove 60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1 (image=quay.io/ceph/ceph:v18, name=busy_hoover, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:24:07 compute-0 systemd[1]: libpod-conmon-60921bd7d2837c4577002864edaff65761bf43e68a9ce73b55ba2d2d2c9052b1.scope: Deactivated successfully.
Nov 22 05:24:08 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:08.668+0000 7f82420f8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 05:24:08 compute-0 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 05:24:08 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Nov 22 05:24:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.353+0000 7f82420f8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 05:24:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.623+0000 7f82420f8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Nov 22 05:24:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:09.864+0000 7f82420f8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 05:24:09 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 05:24:10 compute-0 podman[76384]: 2025-11-22 05:24:10.047726427 +0000 UTC m=+0.039552072 container create a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:24:10 compute-0 systemd[1]: Started libpod-conmon-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope.
Nov 22 05:24:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:10 compute-0 podman[76384]: 2025-11-22 05:24:10.032705408 +0000 UTC m=+0.024531083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:10 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:10.141+0000 7f82420f8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 05:24:10 compute-0 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 05:24:10 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Nov 22 05:24:10 compute-0 podman[76384]: 2025-11-22 05:24:10.145528532 +0000 UTC m=+0.137354227 container init a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:24:10 compute-0 podman[76384]: 2025-11-22 05:24:10.152836577 +0000 UTC m=+0.144662242 container start a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:24:10 compute-0 podman[76384]: 2025-11-22 05:24:10.156457322 +0000 UTC m=+0.148283017 container attach a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:24:10 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:10.361+0000 7f82420f8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 05:24:10 compute-0 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 05:24:10 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Nov 22 05:24:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141882630' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:10 compute-0 loving_swanson[76400]: 
Nov 22 05:24:10 compute-0 loving_swanson[76400]: {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "health": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "status": "HEALTH_OK",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "checks": {},
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "mutes": []
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "election_epoch": 5,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "quorum": [
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         0
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     ],
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "quorum_names": [
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "compute-0"
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     ],
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "quorum_age": 13,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "monmap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "epoch": 1,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "min_mon_release_name": "reef",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_mons": 1
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "osdmap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "epoch": 1,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_osds": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_up_osds": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "osd_up_since": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_in_osds": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "osd_in_since": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_remapped_pgs": 0
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "pgmap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "pgs_by_state": [],
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_pgs": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_pools": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_objects": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "data_bytes": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "bytes_used": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "bytes_avail": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "bytes_total": 0
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "fsmap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "epoch": 1,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "by_rank": [],
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "up:standby": 0
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "mgrmap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "available": false,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "num_standbys": 0,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "modules": [
Nov 22 05:24:10 compute-0 loving_swanson[76400]:             "iostat",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:             "nfs",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:             "restful"
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         ],
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "services": {}
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "servicemap": {
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "epoch": 1,
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:10 compute-0 loving_swanson[76400]:         "services": {}
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     },
Nov 22 05:24:10 compute-0 loving_swanson[76400]:     "progress_events": {}
Nov 22 05:24:10 compute-0 loving_swanson[76400]: }
Nov 22 05:24:10 compute-0 systemd[1]: libpod-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope: Deactivated successfully.
Nov 22 05:24:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3141882630' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:10 compute-0 podman[76426]: 2025-11-22 05:24:10.618679041 +0000 UTC m=+0.040896927 container died a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-772dc5df058b0152a28011b228c66c6d60d62850710e628c4f82a1f8233bb54d-merged.mount: Deactivated successfully.
Nov 22 05:24:10 compute-0 podman[76426]: 2025-11-22 05:24:10.666061489 +0000 UTC m=+0.088279365 container remove a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb (image=quay.io/ceph/ceph:v18, name=loving_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:10 compute-0 systemd[1]: libpod-conmon-a6875e3fcab7e1287fd3d7cd813e95dd123536f8b91fdc447ffe39a0b55da6fb.scope: Deactivated successfully.
Nov 22 05:24:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:11.380+0000 7f82420f8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 05:24:11 compute-0 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 05:24:11 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Nov 22 05:24:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:11.702+0000 7f82420f8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 05:24:11 compute-0 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 05:24:11 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Nov 22 05:24:12 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Nov 22 05:24:12 compute-0 podman[76441]: 2025-11-22 05:24:12.760008549 +0000 UTC m=+0.056084900 container create 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:24:12 compute-0 systemd[1]: Started libpod-conmon-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope.
Nov 22 05:24:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:12 compute-0 podman[76441]: 2025-11-22 05:24:12.734104811 +0000 UTC m=+0.030181152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:12 compute-0 podman[76441]: 2025-11-22 05:24:12.860857986 +0000 UTC m=+0.156934327 container init 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:24:12 compute-0 podman[76441]: 2025-11-22 05:24:12.8704232 +0000 UTC m=+0.166499511 container start 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:12 compute-0 podman[76441]: 2025-11-22 05:24:12.874176489 +0000 UTC m=+0.170252820 container attach 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:13 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:13.086+0000 7f82420f8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 05:24:13 compute-0 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 05:24:13 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Nov 22 05:24:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315193514' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:13 compute-0 zen_borg[76457]: 
Nov 22 05:24:13 compute-0 zen_borg[76457]: {
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "health": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "status": "HEALTH_OK",
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "checks": {},
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "mutes": []
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "election_epoch": 5,
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "quorum": [
Nov 22 05:24:13 compute-0 zen_borg[76457]:         0
Nov 22 05:24:13 compute-0 zen_borg[76457]:     ],
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "quorum_names": [
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "compute-0"
Nov 22 05:24:13 compute-0 zen_borg[76457]:     ],
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "quorum_age": 16,
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "monmap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "epoch": 1,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "min_mon_release_name": "reef",
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_mons": 1
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "osdmap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "epoch": 1,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_osds": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_up_osds": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "osd_up_since": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_in_osds": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "osd_in_since": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_remapped_pgs": 0
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "pgmap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "pgs_by_state": [],
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_pgs": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_pools": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_objects": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "data_bytes": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "bytes_used": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "bytes_avail": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "bytes_total": 0
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "fsmap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "epoch": 1,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "by_rank": [],
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "up:standby": 0
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "mgrmap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "available": false,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "num_standbys": 0,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "modules": [
Nov 22 05:24:13 compute-0 zen_borg[76457]:             "iostat",
Nov 22 05:24:13 compute-0 zen_borg[76457]:             "nfs",
Nov 22 05:24:13 compute-0 zen_borg[76457]:             "restful"
Nov 22 05:24:13 compute-0 zen_borg[76457]:         ],
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "services": {}
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "servicemap": {
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "epoch": 1,
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:13 compute-0 zen_borg[76457]:         "services": {}
Nov 22 05:24:13 compute-0 zen_borg[76457]:     },
Nov 22 05:24:13 compute-0 zen_borg[76457]:     "progress_events": {}
Nov 22 05:24:13 compute-0 zen_borg[76457]: }
Nov 22 05:24:13 compute-0 systemd[1]: libpod-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope: Deactivated successfully.
Nov 22 05:24:13 compute-0 podman[76483]: 2025-11-22 05:24:13.303756731 +0000 UTC m=+0.021605954 container died 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/315193514' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-422b9dc271ad498497670476e25955d5c4e6d6daa88e211fc8884aa86a0e68cc-merged.mount: Deactivated successfully.
Nov 22 05:24:13 compute-0 podman[76483]: 2025-11-22 05:24:13.356257525 +0000 UTC m=+0.074106728 container remove 00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e (image=quay.io/ceph/ceph:v18, name=zen_borg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:13 compute-0 systemd[1]: libpod-conmon-00f016643b8da6b0aead6330c82aa5722f086c3b722bad38cad387e5ccec549e.scope: Deactivated successfully.
Nov 22 05:24:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.124+0000 7f82420f8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Nov 22 05:24:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.375+0000 7f82420f8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.431400206 +0000 UTC m=+0.039957012 container create 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:15 compute-0 systemd[1]: Started libpod-conmon-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope.
Nov 22 05:24:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.506381335 +0000 UTC m=+0.114938201 container init 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.413318836 +0000 UTC m=+0.021875672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.511126212 +0000 UTC m=+0.119683028 container start 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.514379178 +0000 UTC m=+0.122935994 container attach 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:15.622+0000 7f82420f8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Nov 22 05:24:15 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Nov 22 05:24:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320238895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:15 compute-0 stoic_rubin[76515]: 
Nov 22 05:24:15 compute-0 stoic_rubin[76515]: {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "health": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "status": "HEALTH_OK",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "checks": {},
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "mutes": []
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "election_epoch": 5,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "quorum": [
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         0
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     ],
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "quorum_names": [
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "compute-0"
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     ],
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "quorum_age": 18,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "monmap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "epoch": 1,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "min_mon_release_name": "reef",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_mons": 1
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "osdmap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "epoch": 1,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_osds": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_up_osds": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "osd_up_since": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_in_osds": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "osd_in_since": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_remapped_pgs": 0
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "pgmap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "pgs_by_state": [],
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_pgs": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_pools": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_objects": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "data_bytes": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "bytes_used": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "bytes_avail": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "bytes_total": 0
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "fsmap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "epoch": 1,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "by_rank": [],
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "up:standby": 0
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "mgrmap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "available": false,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "num_standbys": 0,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "modules": [
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:             "iostat",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:             "nfs",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:             "restful"
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         ],
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "services": {}
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "servicemap": {
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "epoch": 1,
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:         "services": {}
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     },
Nov 22 05:24:15 compute-0 stoic_rubin[76515]:     "progress_events": {}
Nov 22 05:24:15 compute-0 stoic_rubin[76515]: }
Nov 22 05:24:15 compute-0 systemd[1]: libpod-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope: Deactivated successfully.
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.882238032 +0000 UTC m=+0.490794838 container died 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-67de899847b747bcaa0ef85a14bcd99c683d61985821d3de27546f5a9ed62ed3-merged.mount: Deactivated successfully.
Nov 22 05:24:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3320238895' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:15 compute-0 podman[76498]: 2025-11-22 05:24:15.925307595 +0000 UTC m=+0.533864391 container remove 8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7 (image=quay.io/ceph/ceph:v18, name=stoic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:15 compute-0 systemd[1]: libpod-conmon-8ccfd80016991807960a3d93d1bf9f690fb01464c7fc83045b73cc27032b03b7.scope: Deactivated successfully.
Nov 22 05:24:16 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.116+0000 7f82420f8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Nov 22 05:24:16 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.348+0000 7f82420f8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Nov 22 05:24:16 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:16.943+0000 7f82420f8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 05:24:16 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 05:24:17 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:17.639+0000 7f82420f8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:17 compute-0 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:17 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.00551651 +0000 UTC m=+0.043934577 container create af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:24:18 compute-0 systemd[1]: Started libpod-conmon-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope.
Nov 22 05:24:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.08611299 +0000 UTC m=+0.124531117 container init af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:17.991764085 +0000 UTC m=+0.030182172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.094455871 +0000 UTC m=+0.132873948 container start af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.098738945 +0000 UTC m=+0.137157042 container attach af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:18.337+0000 7f82420f8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/962159204' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]: 
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]: {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "health": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "status": "HEALTH_OK",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "checks": {},
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "mutes": []
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "election_epoch": 5,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "quorum": [
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         0
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     ],
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "quorum_names": [
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "compute-0"
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     ],
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "quorum_age": 21,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "monmap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "epoch": 1,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "min_mon_release_name": "reef",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_mons": 1
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "osdmap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "epoch": 1,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_osds": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_up_osds": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "osd_up_since": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_in_osds": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "osd_in_since": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_remapped_pgs": 0
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "pgmap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "pgs_by_state": [],
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_pgs": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_pools": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_objects": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "data_bytes": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "bytes_used": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "bytes_avail": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "bytes_total": 0
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "fsmap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "epoch": 1,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "by_rank": [],
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "up:standby": 0
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "mgrmap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "available": false,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "num_standbys": 0,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "modules": [
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:             "iostat",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:             "nfs",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:             "restful"
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         ],
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "services": {}
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "servicemap": {
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "epoch": 1,
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:         "services": {}
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     },
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]:     "progress_events": {}
Nov 22 05:24:18 compute-0 affectionate_ritchie[76571]: }
Nov 22 05:24:18 compute-0 systemd[1]: libpod-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope: Deactivated successfully.
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.461828942 +0000 UTC m=+0.500247089 container died af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4325d182c65cc3a415dc889d1bc39b8634009ae7afd72adb022f883d939f79-merged.mount: Deactivated successfully.
Nov 22 05:24:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/962159204' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:18 compute-0 podman[76555]: 2025-11-22 05:24:18.518370302 +0000 UTC m=+0.556788399 container remove af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa (image=quay.io/ceph/ceph:v18, name=affectionate_ritchie, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:18 compute-0 systemd[1]: libpod-conmon-af44549f89d1636dd83cbd7c2f388585f977b433a17395331aa2f3b3c16550aa.scope: Deactivated successfully.
Nov 22 05:24:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:18.574+0000 7f82420f8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x555a6a2231e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mscchl
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mscchl(active, starting, since 0.0139237s)
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer INFO root] Starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: crash
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Manager daemon compute-0.mscchl is now available
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:24:18
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: progress
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [progress INFO root] Loading...
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [progress INFO root] No stored events to load
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: restful
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: status
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"} v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:18 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Nov 22 05:24:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 22 05:24:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:19 compute-0 ceph-mon[75840]: Activating manager daemon compute-0.mscchl
Nov 22 05:24:19 compute-0 ceph-mon[75840]: mgrmap e2: compute-0.mscchl(active, starting, since 0.0139237s)
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: Manager daemon compute-0.mscchl is now available
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:19 compute-0 ceph-mon[75840]: from='mgr.14102 192.168.122.100:0/2479852038' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:19 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mscchl(active, since 1.0286s)
Nov 22 05:24:20 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:20 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mscchl(active, since 2s)
Nov 22 05:24:20 compute-0 ceph-mon[75840]: mgrmap e3: compute-0.mscchl(active, since 1.0286s)
Nov 22 05:24:20 compute-0 podman[76688]: 2025-11-22 05:24:20.63108057 +0000 UTC m=+0.076842351 container create be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:24:20 compute-0 systemd[1]: Started libpod-conmon-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope.
Nov 22 05:24:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:20 compute-0 podman[76688]: 2025-11-22 05:24:20.60242106 +0000 UTC m=+0.048182911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:20 compute-0 podman[76688]: 2025-11-22 05:24:20.713548989 +0000 UTC m=+0.159310780 container init be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:20 compute-0 podman[76688]: 2025-11-22 05:24:20.723565315 +0000 UTC m=+0.169327106 container start be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:24:20 compute-0 podman[76688]: 2025-11-22 05:24:20.7279042 +0000 UTC m=+0.173666031 container attach be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:24:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 05:24:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870129946' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]: 
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]: {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "health": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "status": "HEALTH_OK",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "checks": {},
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "mutes": []
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "election_epoch": 5,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "quorum": [
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         0
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     ],
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "quorum_names": [
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "compute-0"
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     ],
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "quorum_age": 24,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "monmap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "epoch": 1,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "min_mon_release_name": "reef",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_mons": 1
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "osdmap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "epoch": 1,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_osds": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_up_osds": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "osd_up_since": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_in_osds": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "osd_in_since": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_remapped_pgs": 0
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "pgmap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "pgs_by_state": [],
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_pgs": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_pools": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_objects": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "data_bytes": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "bytes_used": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "bytes_avail": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "bytes_total": 0
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "fsmap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "epoch": 1,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "by_rank": [],
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "up:standby": 0
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "mgrmap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "available": true,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "num_standbys": 0,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "modules": [
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:             "iostat",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:             "nfs",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:             "restful"
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         ],
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "services": {}
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "servicemap": {
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "epoch": 1,
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "modified": "2025-11-22T05:23:54.066984+0000",
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:         "services": {}
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     },
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]:     "progress_events": {}
Nov 22 05:24:21 compute-0 keen_brahmagupta[76704]: }
Nov 22 05:24:21 compute-0 systemd[1]: libpod-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope: Deactivated successfully.
Nov 22 05:24:21 compute-0 podman[76688]: 2025-11-22 05:24:21.367183119 +0000 UTC m=+0.812944940 container died be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:24:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef24bac700ebc009c2d1dad1cdebf92b565f9163ffdcccd4c6276cb94aef9800-merged.mount: Deactivated successfully.
Nov 22 05:24:21 compute-0 podman[76688]: 2025-11-22 05:24:21.427222572 +0000 UTC m=+0.872984333 container remove be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76 (image=quay.io/ceph/ceph:v18, name=keen_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:21 compute-0 systemd[1]: libpod-conmon-be368a73a8cab2dd915bc6ee777ecbcc25d06104ed1e062d693f20043acf0c76.scope: Deactivated successfully.
Nov 22 05:24:21 compute-0 podman[76744]: 2025-11-22 05:24:21.521681639 +0000 UTC m=+0.063169457 container create 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:24:21 compute-0 systemd[1]: Started libpod-conmon-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope.
Nov 22 05:24:21 compute-0 podman[76744]: 2025-11-22 05:24:21.494654502 +0000 UTC m=+0.036142370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:21 compute-0 ceph-mon[75840]: mgrmap e4: compute-0.mscchl(active, since 2s)
Nov 22 05:24:21 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2870129946' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 05:24:21 compute-0 podman[76744]: 2025-11-22 05:24:21.642804765 +0000 UTC m=+0.184292633 container init 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:21 compute-0 podman[76744]: 2025-11-22 05:24:21.648555407 +0000 UTC m=+0.190043185 container start 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:21 compute-0 podman[76744]: 2025-11-22 05:24:21.76166826 +0000 UTC m=+0.303156038 container attach 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 05:24:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3208678017' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:24:22 compute-0 systemd[1]: libpod-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope: Deactivated successfully.
Nov 22 05:24:22 compute-0 podman[76744]: 2025-11-22 05:24:22.176641304 +0000 UTC m=+0.718129112 container died 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:24:22 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3208678017' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fe4e91af1e80f23c10bb8ea728709d3b497f4ef719a7d7586dce1356b8630c2-merged.mount: Deactivated successfully.
Nov 22 05:24:23 compute-0 podman[76744]: 2025-11-22 05:24:23.115746461 +0000 UTC m=+1.657234249 container remove 5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d (image=quay.io/ceph/ceph:v18, name=interesting_chatterjee, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:23 compute-0 systemd[1]: libpod-conmon-5ce3dc9aaa1f3b47c019aa6ac76c7e500424925e28b6ee5a693487c90e959e3d.scope: Deactivated successfully.
Nov 22 05:24:23 compute-0 podman[76801]: 2025-11-22 05:24:23.27172007 +0000 UTC m=+0.125124331 container create 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:23 compute-0 podman[76801]: 2025-11-22 05:24:23.181972849 +0000 UTC m=+0.035377140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:23 compute-0 systemd[1]: Started libpod-conmon-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope.
Nov 22 05:24:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:23 compute-0 podman[76801]: 2025-11-22 05:24:23.514616158 +0000 UTC m=+0.368020429 container init 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:23 compute-0 podman[76801]: 2025-11-22 05:24:23.568418666 +0000 UTC m=+0.421822907 container start 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:24:23 compute-0 podman[76801]: 2025-11-22 05:24:23.628950862 +0000 UTC m=+0.482355203 container attach 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 22 05:24:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 05:24:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 22 05:24:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mscchl(active, since 5s)
Nov 22 05:24:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 05:24:24 compute-0 systemd[1]: libpod-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope: Deactivated successfully.
Nov 22 05:24:24 compute-0 podman[76801]: 2025-11-22 05:24:24.222071446 +0000 UTC m=+1.075475727 container died 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e36c896a2937ad9137ef2e8190834602de0a8b1e7c9a7898ce08b3ac2fc08d-merged.mount: Deactivated successfully.
Nov 22 05:24:24 compute-0 podman[76801]: 2025-11-22 05:24:24.277074206 +0000 UTC m=+1.130478467 container remove 226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98 (image=quay.io/ceph/ceph:v18, name=charming_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:24 compute-0 systemd[1]: libpod-conmon-226984afbc1f05a039c185902f72f19c5aca60c37201f1a9735f0ec70b715c98.scope: Deactivated successfully.
Nov 22 05:24:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: ignoring --setuser ceph since I am not root
Nov 22 05:24:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: ignoring --setgroup ceph since I am not root
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Nov 22 05:24:24 compute-0 podman[76856]: 2025-11-22 05:24:24.330322959 +0000 UTC m=+0.037102845 container create 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:24 compute-0 systemd[1]: Started libpod-conmon-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope.
Nov 22 05:24:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:24 compute-0 podman[76856]: 2025-11-22 05:24:24.314684214 +0000 UTC m=+0.021464120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Nov 22 05:24:24 compute-0 podman[76856]: 2025-11-22 05:24:24.43280454 +0000 UTC m=+0.139584436 container init 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:24 compute-0 podman[76856]: 2025-11-22 05:24:24.436960949 +0000 UTC m=+0.143740835 container start 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:24 compute-0 podman[76856]: 2025-11-22 05:24:24.440213796 +0000 UTC m=+0.146993682 container attach 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:24.723+0000 7f53cd216140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Nov 22 05:24:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:24.974+0000 7f53cd216140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:24:24 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Nov 22 05:24:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 05:24:24 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651815984' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 05:24:24 compute-0 elastic_thompson[76897]: {
Nov 22 05:24:24 compute-0 elastic_thompson[76897]:     "epoch": 5,
Nov 22 05:24:24 compute-0 elastic_thompson[76897]:     "available": true,
Nov 22 05:24:24 compute-0 elastic_thompson[76897]:     "active_name": "compute-0.mscchl",
Nov 22 05:24:24 compute-0 elastic_thompson[76897]:     "num_standby": 0
Nov 22 05:24:24 compute-0 elastic_thompson[76897]: }
Nov 22 05:24:25 compute-0 systemd[1]: libpod-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope: Deactivated successfully.
Nov 22 05:24:25 compute-0 podman[76856]: 2025-11-22 05:24:25.001356291 +0000 UTC m=+0.708136177 container died 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:24:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-df95d5296059388bfcfe1cdec2af317f528b0276bbb47b0eb530d0de3ec3d6dc-merged.mount: Deactivated successfully.
Nov 22 05:24:25 compute-0 podman[76856]: 2025-11-22 05:24:25.039074999 +0000 UTC m=+0.745854885 container remove 4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4 (image=quay.io/ceph/ceph:v18, name=elastic_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:25 compute-0 systemd[1]: libpod-conmon-4d441074647f1c502207115b20d9175033f42e01ed37fd400a3074a83579fbb4.scope: Deactivated successfully.
Nov 22 05:24:25 compute-0 podman[76934]: 2025-11-22 05:24:25.135579496 +0000 UTC m=+0.068011253 container create a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:25 compute-0 systemd[1]: Started libpod-conmon-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope.
Nov 22 05:24:25 compute-0 podman[76934]: 2025-11-22 05:24:25.106738022 +0000 UTC m=+0.039169849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3152519921' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 05:24:25 compute-0 ceph-mon[75840]: mgrmap e5: compute-0.mscchl(active, since 5s)
Nov 22 05:24:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1651815984' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 05:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:25 compute-0 podman[76934]: 2025-11-22 05:24:25.24028496 +0000 UTC m=+0.172716747 container init a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:24:25 compute-0 podman[76934]: 2025-11-22 05:24:25.249505594 +0000 UTC m=+0.181937381 container start a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:24:25 compute-0 podman[76934]: 2025-11-22 05:24:25.25407811 +0000 UTC m=+0.186509907 container attach a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:26 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Nov 22 05:24:27 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:27.178+0000 7f53cd216140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:24:27 compute-0 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:24:27 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Nov 22 05:24:28 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Nov 22 05:24:28 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:28.865+0000 7f53cd216140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:24:28 compute-0 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:24:28 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 05:24:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 05:24:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 05:24:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]:   from numpy import show_config as show_numpy_config
Nov 22 05:24:29 compute-0 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:24:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:29.385+0000 7f53cd216140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:24:29 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Nov 22 05:24:29 compute-0 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:24:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:29.628+0000 7f53cd216140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:24:29 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Nov 22 05:24:29 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Nov 22 05:24:30 compute-0 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 05:24:30 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:30.095+0000 7f53cd216140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 05:24:30 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Nov 22 05:24:31 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Nov 22 05:24:32 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 05:24:32 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Nov 22 05:24:32 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Nov 22 05:24:33 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:33.636+0000 7f53cd216140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 05:24:33 compute-0 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 05:24:33 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.300+0000 7f53cd216140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.571+0000 7f53cd216140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:34.794+0000 7f53cd216140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 05:24:34 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 05:24:35 compute-0 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 05:24:35 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Nov 22 05:24:35 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:35.066+0000 7f53cd216140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 05:24:35 compute-0 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 05:24:35 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Nov 22 05:24:35 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:35.311+0000 7f53cd216140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 05:24:36 compute-0 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 05:24:36 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:36.312+0000 7f53cd216140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 05:24:36 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Nov 22 05:24:36 compute-0 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 05:24:36 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Nov 22 05:24:36 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:36.617+0000 7f53cd216140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 05:24:37 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Nov 22 05:24:38 compute-0 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 05:24:38 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Nov 22 05:24:38 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:38.047+0000 7f53cd216140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.091+0000 7f53cd216140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Nov 22 05:24:40 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.338+0000 7f53cd216140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:40.597+0000 7f53cd216140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Nov 22 05:24:40 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 05:24:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.128+0000 7f53cd216140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 05:24:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.370+0000 7f53cd216140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 05:24:41 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 05:24:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:41.992+0000 7f53cd216140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 05:24:42 compute-0 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:42.672+0000 7f53cd216140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 05:24:42 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 05:24:43 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:43.388+0000 7f53cd216140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 05:24:43 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:24:43.623+0000 7f53cd216140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mscchl restarted
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mscchl
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x5579d3e051e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mscchl(active, starting, since 0.0117316s)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Starting
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Manager daemon compute-0.mscchl is now available
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:24:43
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 05:24:43 compute-0 ceph-mon[75840]: Active manager daemon compute-0.mscchl restarted
Nov 22 05:24:43 compute-0 ceph-mon[75840]: Activating manager daemon compute-0.mscchl
Nov 22 05:24:43 compute-0 ceph-mon[75840]: osdmap e2: 0 total, 0 up, 0 in
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mgrmap e6: compute-0.mscchl(active, starting, since 0.0117316s)
Nov 22 05:24:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mscchl", "id": "compute-0.mscchl"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mon[75840]: Manager daemon compute-0.mscchl is now available
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: crash
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Starting
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: progress
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [progress INFO root] Loading...
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [progress INFO root] No stored events to load
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: restful
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: status
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Nov 22 05:24:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"} v 0) v1
Nov 22 05:24:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Nov 22 05:24:43 compute-0 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Nov 22 05:24:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mscchl(active, since 1.02397s)
Nov 22 05:24:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 05:24:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 05:24:44 compute-0 brave_carson[76950]: {
Nov 22 05:24:44 compute-0 brave_carson[76950]:     "mgrmap_epoch": 7,
Nov 22 05:24:44 compute-0 brave_carson[76950]:     "initialized": true
Nov 22 05:24:44 compute-0 brave_carson[76950]: }
Nov 22 05:24:44 compute-0 systemd[1]: libpod-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope: Deactivated successfully.
Nov 22 05:24:44 compute-0 podman[76934]: 2025-11-22 05:24:44.681027671 +0000 UTC m=+19.613459428 container died a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:24:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 22 05:24:44 compute-0 ceph-mon[75840]: Found migration_current of "None". Setting to last migration.
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/mirror_snapshot_schedule"}]: dispatch
Nov 22 05:24:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mscchl/trash_purge_schedule"}]: dispatch
Nov 22 05:24:44 compute-0 ceph-mon[75840]: mgrmap e7: compute-0.mscchl(active, since 1.02397s)
Nov 22 05:24:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 22 05:24:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-15ae24b66888fe9c33232c4c25e2ab09f82973fd8a4dadd367e2d9d78e9ee41f-merged.mount: Deactivated successfully.
Nov 22 05:24:44 compute-0 podman[76934]: 2025-11-22 05:24:44.86837776 +0000 UTC m=+19.800809547 container remove a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:44 compute-0 systemd[1]: libpod-conmon-a6d731273176531d670196fb9bd1ec6a74893f85cb70336b3e91e53a88b3ab48.scope: Deactivated successfully.
Nov 22 05:24:44 compute-0 podman[77114]: 2025-11-22 05:24:44.95990445 +0000 UTC m=+0.063481198 container create 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:24:45 compute-0 systemd[1]: Started libpod-conmon-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope.
Nov 22 05:24:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:45 compute-0 podman[77114]: 2025-11-22 05:24:44.934586943 +0000 UTC m=+0.038163711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:45 compute-0 podman[77114]: 2025-11-22 05:24:45.067572345 +0000 UTC m=+0.171149133 container init 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:24:45 compute-0 podman[77114]: 2025-11-22 05:24:45.078110516 +0000 UTC m=+0.181687284 container start 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:45 compute-0 podman[77114]: 2025-11-22 05:24:45.08262758 +0000 UTC m=+0.186204348 container attach 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 22 05:24:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:24:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:45 compute-0 systemd[1]: libpod-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope: Deactivated successfully.
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 05:24:45 compute-0 podman[77169]: 2025-11-22 05:24:45.789142026 +0000 UTC m=+0.024652870 container died 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-29f26b2fc7ae9c43bef3e61abd87fab0d1bd87afb949f6b8d0ad3ff5e451f608-merged.mount: Deactivated successfully.
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:45 compute-0 podman[77169]: 2025-11-22 05:24:45.839926525 +0000 UTC m=+0.075437289 container remove 0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7 (image=quay.io/ceph/ceph:v18, name=strange_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:45 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mscchl(active, since 2s)
Nov 22 05:24:45 compute-0 systemd[1]: libpod-conmon-0b3f5f380df393e2fc901f917441ef082d8b827d9e2d59e6f1621cf6d9cbfbc7.scope: Deactivated successfully.
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 05:24:45 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 05:24:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:24:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:45 compute-0 podman[77195]: 2025-11-22 05:24:45.931826575 +0000 UTC m=+0.058437699 container create b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:24:45 compute-0 systemd[1]: Started libpod-conmon-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope.
Nov 22 05:24:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:45.911379713 +0000 UTC m=+0.037990887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:46.007197981 +0000 UTC m=+0.133809125 container init b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:46.013377652 +0000 UTC m=+0.139988786 container start b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:46.017303679 +0000 UTC m=+0.143914853 container attach b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 22 05:24:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_user
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 22 05:24:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 22 05:24:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_config
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 22 05:24:46 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 22 05:24:46 compute-0 zealous_brahmagupta[77212]: ssh user set to ceph-admin. sudo will be used
Nov 22 05:24:46 compute-0 systemd[1]: libpod-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope: Deactivated successfully.
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:46.555444699 +0000 UTC m=+0.682055913 container died b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-75408227737b2a3ea0a86c268f487356e2007740b22bfc8002ede5e11aaf7404-merged.mount: Deactivated successfully.
Nov 22 05:24:46 compute-0 podman[77195]: 2025-11-22 05:24:46.608726556 +0000 UTC m=+0.735337720 container remove b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd (image=quay.io/ceph/ceph:v18, name=zealous_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:24:46 compute-0 systemd[1]: libpod-conmon-b92c2b9178b8b71d7733097024de531d3a8e4ab212091e7b1e46f27d9d5a25bd.scope: Deactivated successfully.
Nov 22 05:24:46 compute-0 podman[77250]: 2025-11-22 05:24:46.681011326 +0000 UTC m=+0.047449287 container create 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:24:46 compute-0 systemd[1]: Started libpod-conmon-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope.
Nov 22 05:24:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:46 compute-0 podman[77250]: 2025-11-22 05:24:46.662410574 +0000 UTC m=+0.028848525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:46 compute-0 podman[77250]: 2025-11-22 05:24:46.765531614 +0000 UTC m=+0.131969615 container init 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:24:46 compute-0 podman[77250]: 2025-11-22 05:24:46.778552073 +0000 UTC m=+0.144990044 container start 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:46 compute-0 podman[77250]: 2025-11-22 05:24:46.783663994 +0000 UTC m=+0.150102015 container attach 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:46 compute-0 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Bus STARTING
Nov 22 05:24:46 compute-0 ceph-mon[75840]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:46 compute-0 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Serving on https://192.168.122.100:7150
Nov 22 05:24:46 compute-0 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Client ('192.168.122.100', 43068) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 05:24:46 compute-0 ceph-mon[75840]: mgrmap e8: compute-0.mscchl(active, since 2s)
Nov 22 05:24:46 compute-0 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Serving on http://192.168.122.100:8765
Nov 22 05:24:46 compute-0 ceph-mon[75840]: [22/Nov/2025:05:24:45] ENGINE Bus STARTED
Nov 22 05:24:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019923970 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 22 05:24:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: [cephadm INFO root] Set ssh private key
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 22 05:24:47 compute-0 systemd[1]: libpod-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope: Deactivated successfully.
Nov 22 05:24:47 compute-0 podman[77250]: 2025-11-22 05:24:47.362865593 +0000 UTC m=+0.729303524 container died 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4863ad6eb015b20699539e8a69252f7a37e5e8f847669b4a28d772e6b119bdc-merged.mount: Deactivated successfully.
Nov 22 05:24:47 compute-0 podman[77250]: 2025-11-22 05:24:47.397377014 +0000 UTC m=+0.763814945 container remove 247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe (image=quay.io/ceph/ceph:v18, name=upbeat_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:47 compute-0 systemd[1]: libpod-conmon-247b6b0177b04af41e404b3ff2c2a3a1cb5eef5c49093187babdf39d78d58dbe.scope: Deactivated successfully.
Nov 22 05:24:47 compute-0 podman[77304]: 2025-11-22 05:24:47.49564955 +0000 UTC m=+0.071110430 container create 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:47 compute-0 systemd[1]: Started libpod-conmon-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope.
Nov 22 05:24:47 compute-0 podman[77304]: 2025-11-22 05:24:47.463836114 +0000 UTC m=+0.039297024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:47 compute-0 podman[77304]: 2025-11-22 05:24:47.60786606 +0000 UTC m=+0.183326890 container init 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:24:47 compute-0 podman[77304]: 2025-11-22 05:24:47.616758336 +0000 UTC m=+0.192219176 container start 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:24:47 compute-0 podman[77304]: 2025-11-22 05:24:47.633608689 +0000 UTC m=+0.209069739 container attach 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:24:47 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:47 compute-0 ceph-mon[75840]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:47 compute-0 ceph-mon[75840]: Set ssh ssh_user
Nov 22 05:24:47 compute-0 ceph-mon[75840]: Set ssh ssh_config
Nov 22 05:24:47 compute-0 ceph-mon[75840]: ssh user set to ceph-admin. sudo will be used
Nov 22 05:24:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 22 05:24:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:48 compute-0 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 22 05:24:48 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 22 05:24:48 compute-0 systemd[1]: libpod-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope: Deactivated successfully.
Nov 22 05:24:48 compute-0 podman[77304]: 2025-11-22 05:24:48.135068119 +0000 UTC m=+0.710528959 container died 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-33735800211a5740fa156810d3d82cfd78777fda7bcfbb78f0291c2e50fb89e0-merged.mount: Deactivated successfully.
Nov 22 05:24:48 compute-0 podman[77304]: 2025-11-22 05:24:48.17796945 +0000 UTC m=+0.753430290 container remove 4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057 (image=quay.io/ceph/ceph:v18, name=recursing_dewdney, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:24:48 compute-0 systemd[1]: libpod-conmon-4d6363294882bed9d9dda1dca3e4e3947fe698e87ea56174eb345d56e5523057.scope: Deactivated successfully.
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.241298094 +0000 UTC m=+0.046569733 container create 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:48 compute-0 systemd[1]: Started libpod-conmon-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope.
Nov 22 05:24:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.217766506 +0000 UTC m=+0.023038175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.31304801 +0000 UTC m=+0.118319699 container init 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.321842532 +0000 UTC m=+0.127114201 container start 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.326569052 +0000 UTC m=+0.131840721 container attach 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:48 compute-0 friendly_boyd[77373]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIPiZ7ibrX9u0jc8n2TadxqUhEaLFpm5hoxNk7E8sPzVr+7md04KUsVyLl7YefTfTCAtLesLv0Rgu5rzJ2QOUo0OMuFaPi6qKRqxC/WvpAyqe3xQYxOslHfgzEHMI8+kcs7/1ziCQ9EVoMBSqRsuBeOyMVLTs/yzR6xTv8E9xwbovlADFyvmgzXwA3Z+oeMxT0iudT9c50Hi6PeQBfJypCJyMsh2/Rzc3GKzKNVgV8DKirHuSqrZHTGzcdFgwgw2UEEt6KVNxLzPPsOWLuCiq78FKHFgVLSnFMGltzRbFNcegXdk6LQUSX5PETF+owCAWMWDgUaDWhwPTo7FmmMvW7GYSi3TI+jYuuWpy918L1Wh9Uyc67WsyCoELg2CIejA92oIWdIl5DlBmtbcaM0aBpJRFVxUBYE6R9envdGQOg+u+t8QrJb6MS6ebG+tH6CbFn8Snf6CXXokl7Q/PJuZWCbe1RP2PisYlql3o9zPU1hIA73eC66p13WiW9z3YCiX8= zuul@controller
Nov 22 05:24:48 compute-0 systemd[1]: libpod-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope: Deactivated successfully.
Nov 22 05:24:48 compute-0 podman[77357]: 2025-11-22 05:24:48.898234975 +0000 UTC m=+0.703506614 container died 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8f33f3a8965e00481711e4f25989818641b75db3164b24840efb59338370027-merged.mount: Deactivated successfully.
Nov 22 05:24:49 compute-0 podman[77357]: 2025-11-22 05:24:49.107865838 +0000 UTC m=+0.913137507 container remove 7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04 (image=quay.io/ceph/ceph:v18, name=friendly_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:49 compute-0 systemd[1]: libpod-conmon-7c39570854f0d1077d4d3bcd84112aa291446e41ae34ea0c20185d89463abe04.scope: Deactivated successfully.
Nov 22 05:24:49 compute-0 ceph-mon[75840]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:49 compute-0 ceph-mon[75840]: Set ssh ssh_identity_key
Nov 22 05:24:49 compute-0 ceph-mon[75840]: Set ssh private key
Nov 22 05:24:49 compute-0 ceph-mon[75840]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:49 compute-0 ceph-mon[75840]: Set ssh ssh_identity_pub
Nov 22 05:24:49 compute-0 podman[77409]: 2025-11-22 05:24:49.178147433 +0000 UTC m=+0.050359538 container create 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:49 compute-0 systemd[1]: Started libpod-conmon-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope.
Nov 22 05:24:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:49 compute-0 podman[77409]: 2025-11-22 05:24:49.152555889 +0000 UTC m=+0.024768084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:49 compute-0 podman[77409]: 2025-11-22 05:24:49.253706964 +0000 UTC m=+0.125919089 container init 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:24:49 compute-0 podman[77409]: 2025-11-22 05:24:49.260754858 +0000 UTC m=+0.132966983 container start 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:49 compute-0 podman[77409]: 2025-11-22 05:24:49.263951436 +0000 UTC m=+0.136163561 container attach 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:24:49 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:50 compute-0 sshd-session[77451]: Accepted publickey for ceph-admin from 192.168.122.100 port 57124 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:50 compute-0 systemd-logind[798]: New session 20 of user ceph-admin.
Nov 22 05:24:50 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 05:24:50 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 05:24:50 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 05:24:50 compute-0 ceph-mon[75840]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:50 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 22 05:24:50 compute-0 systemd[77455]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:50 compute-0 systemd[77455]: Queued start job for default target Main User Target.
Nov 22 05:24:50 compute-0 systemd[77455]: Created slice User Application Slice.
Nov 22 05:24:50 compute-0 systemd[77455]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 05:24:50 compute-0 systemd[77455]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 05:24:50 compute-0 systemd[77455]: Reached target Paths.
Nov 22 05:24:50 compute-0 systemd[77455]: Reached target Timers.
Nov 22 05:24:50 compute-0 systemd[77455]: Starting D-Bus User Message Bus Socket...
Nov 22 05:24:50 compute-0 systemd[77455]: Starting Create User's Volatile Files and Directories...
Nov 22 05:24:50 compute-0 sshd-session[77468]: Accepted publickey for ceph-admin from 192.168.122.100 port 57132 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:50 compute-0 systemd[77455]: Finished Create User's Volatile Files and Directories.
Nov 22 05:24:50 compute-0 systemd[77455]: Listening on D-Bus User Message Bus Socket.
Nov 22 05:24:50 compute-0 systemd[77455]: Reached target Sockets.
Nov 22 05:24:50 compute-0 systemd[77455]: Reached target Basic System.
Nov 22 05:24:50 compute-0 systemd[77455]: Reached target Main User Target.
Nov 22 05:24:50 compute-0 systemd[77455]: Startup finished in 161ms.
Nov 22 05:24:50 compute-0 systemd-logind[798]: New session 22 of user ceph-admin.
Nov 22 05:24:50 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 22 05:24:50 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 22 05:24:50 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 22 05:24:50 compute-0 sshd-session[77451]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:50 compute-0 sshd-session[77468]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:50 compute-0 sudo[77475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:50 compute-0 sudo[77475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:50 compute-0 sudo[77475]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:50 compute-0 sudo[77500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:50 compute-0 sudo[77500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:50 compute-0 sudo[77500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:50 compute-0 sshd-session[77525]: Accepted publickey for ceph-admin from 192.168.122.100 port 57134 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:50 compute-0 systemd-logind[798]: New session 23 of user ceph-admin.
Nov 22 05:24:50 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 22 05:24:50 compute-0 sshd-session[77525]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:50 compute-0 sudo[77529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:50 compute-0 sudo[77529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:50 compute-0 sudo[77529]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:51 compute-0 sudo[77555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 05:24:51 compute-0 sudo[77555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:51 compute-0 sudo[77555]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:51 compute-0 ceph-mon[75840]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:51 compute-0 sshd-session[77581]: Accepted publickey for ceph-admin from 192.168.122.100 port 57140 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:51 compute-0 systemd-logind[798]: New session 24 of user ceph-admin.
Nov 22 05:24:51 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 22 05:24:51 compute-0 sshd-session[77581]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:51 compute-0 sudo[77585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:51 compute-0 sudo[77585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:51 compute-0 sudo[77585]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:51 compute-0 sudo[77610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 22 05:24:51 compute-0 sudo[77610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:51 compute-0 sudo[77610]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:51 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 22 05:24:51 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 22 05:24:51 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:51 compute-0 sshd-session[77635]: Accepted publickey for ceph-admin from 192.168.122.100 port 57152 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:51 compute-0 systemd-logind[798]: New session 25 of user ceph-admin.
Nov 22 05:24:51 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 22 05:24:51 compute-0 sshd-session[77635]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:51 compute-0 sudo[77639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:51 compute-0 sudo[77639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:51 compute-0 sudo[77639]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053068 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:24:52 compute-0 sshd-session[77554]: Invalid user solana from 80.94.92.166 port 40868
Nov 22 05:24:52 compute-0 sudo[77664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:24:52 compute-0 sudo[77664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:52 compute-0 sudo[77664]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:52 compute-0 sshd-session[77554]: Connection closed by invalid user solana 80.94.92.166 port 40868 [preauth]
Nov 22 05:24:52 compute-0 sshd-session[77689]: Accepted publickey for ceph-admin from 192.168.122.100 port 57164 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:52 compute-0 systemd-logind[798]: New session 26 of user ceph-admin.
Nov 22 05:24:52 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 22 05:24:52 compute-0 sshd-session[77689]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:52 compute-0 ceph-mon[75840]: Deploying cephadm binary to compute-0
Nov 22 05:24:52 compute-0 sudo[77693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:52 compute-0 sudo[77693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:52 compute-0 sudo[77693]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:52 compute-0 sudo[77718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:24:52 compute-0 sudo[77718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:52 compute-0 sudo[77718]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:52 compute-0 sshd-session[77743]: Accepted publickey for ceph-admin from 192.168.122.100 port 57174 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:52 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Nov 22 05:24:52 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 22 05:24:52 compute-0 sshd-session[77743]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:52 compute-0 sudo[77747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:52 compute-0 sudo[77747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:52 compute-0 sudo[77747]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:53 compute-0 sudo[77772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 22 05:24:53 compute-0 sudo[77772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:53 compute-0 sudo[77772]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:53 compute-0 sshd-session[77797]: Accepted publickey for ceph-admin from 192.168.122.100 port 57190 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:53 compute-0 systemd-logind[798]: New session 28 of user ceph-admin.
Nov 22 05:24:53 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 22 05:24:53 compute-0 sshd-session[77797]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:53 compute-0 sudo[77801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:53 compute-0 sudo[77801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:53 compute-0 sudo[77801]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:53 compute-0 sudo[77826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:24:53 compute-0 sudo[77826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:53 compute-0 sudo[77826]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:53 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:53 compute-0 sshd-session[77851]: Accepted publickey for ceph-admin from 192.168.122.100 port 57198 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:53 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Nov 22 05:24:53 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 22 05:24:53 compute-0 sshd-session[77851]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:53 compute-0 sudo[77855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:53 compute-0 sudo[77855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:53 compute-0 sudo[77855]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:53 compute-0 sudo[77880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 22 05:24:53 compute-0 sudo[77880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:53 compute-0 sudo[77880]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:54 compute-0 sshd-session[77905]: Accepted publickey for ceph-admin from 192.168.122.100 port 57212 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:54 compute-0 systemd-logind[798]: New session 30 of user ceph-admin.
Nov 22 05:24:54 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 22 05:24:54 compute-0 sshd-session[77905]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:54 compute-0 sshd-session[77932]: Accepted publickey for ceph-admin from 192.168.122.100 port 57214 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:54 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Nov 22 05:24:54 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 22 05:24:54 compute-0 sshd-session[77932]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:54 compute-0 sudo[77936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:54 compute-0 sudo[77936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:54 compute-0 sudo[77936]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 sudo[77961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 22 05:24:55 compute-0 sudo[77961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:55 compute-0 sudo[77961]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 sshd-session[77986]: Accepted publickey for ceph-admin from 192.168.122.100 port 57228 ssh2: RSA SHA256:0wzg2xxsaO5ETNoDBhjxLkFLbxZzycXPjR7DF+4FiSM
Nov 22 05:24:55 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Nov 22 05:24:55 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 22 05:24:55 compute-0 sshd-session[77986]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 05:24:55 compute-0 sudo[77990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:55 compute-0 sudo[77990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:55 compute-0 sudo[77990]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 sudo[78015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 05:24:55 compute-0 sudo[78015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:55 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:55 compute-0 sudo[78015]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:24:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:55 compute-0 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Nov 22 05:24:55 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 05:24:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:24:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:55 compute-0 strange_boyd[77425]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 05:24:55 compute-0 systemd[1]: libpod-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope: Deactivated successfully.
Nov 22 05:24:55 compute-0 sudo[78061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:55 compute-0 sudo[78061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:55 compute-0 sudo[78061]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 podman[78074]: 2025-11-22 05:24:55.867142695 +0000 UTC m=+0.047059946 container died 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-caeadcf211dfb3c248a146e8e184d3f6f1b3eb1a2bf7f5f59d7b32e3e82a2de8-merged.mount: Deactivated successfully.
Nov 22 05:24:55 compute-0 podman[78074]: 2025-11-22 05:24:55.911021944 +0000 UTC m=+0.090939085 container remove 1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa (image=quay.io/ceph/ceph:v18, name=strange_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:24:55 compute-0 sudo[78097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:55 compute-0 sudo[78097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:55 compute-0 systemd[1]: libpod-conmon-1396503374cd2043c256c7738d3e60132f1296ea44fbe96877f472192350dbfa.scope: Deactivated successfully.
Nov 22 05:24:55 compute-0 sudo[78097]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:55 compute-0 sudo[78126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:55 compute-0 sudo[78126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:56 compute-0 sudo[78126]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.003121409 +0000 UTC m=+0.059147939 container create 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:24:56 compute-0 systemd[1]: Started libpod-conmon-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope.
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:55.970685736 +0000 UTC m=+0.026712296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:56 compute-0 sudo[78165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 22 05:24:56 compute-0 sudo[78165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.096024158 +0000 UTC m=+0.152050708 container init 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.10374247 +0000 UTC m=+0.159768990 container start 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.1069663 +0000 UTC m=+0.162992860 container attach 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.383198067 +0000 UTC m=+0.062228446 container create 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:24:56 compute-0 systemd[1]: Started libpod-conmon-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope.
Nov 22 05:24:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.354468995 +0000 UTC m=+0.033499454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.457694128 +0000 UTC m=+0.136724487 container init 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.46358525 +0000 UTC m=+0.142615619 container start 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.466948613 +0000 UTC m=+0.145978972 container attach 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:24:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:56 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 22 05:24:56 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 22 05:24:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 05:24:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:56 compute-0 gracious_keller[78189]: Scheduled mon update...
Nov 22 05:24:56 compute-0 systemd[1]: libpod-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope: Deactivated successfully.
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.687046184 +0000 UTC m=+0.743072704 container died 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c06cc3e85f0c434b6e15fb9ea969d71c26ba5a9c64c0cbec9fd68ae2df1437b-merged.mount: Deactivated successfully.
Nov 22 05:24:56 compute-0 podman[78127]: 2025-11-22 05:24:56.721607316 +0000 UTC m=+0.777633836 container remove 792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6 (image=quay.io/ceph/ceph:v18, name=gracious_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:56 compute-0 systemd[1]: libpod-conmon-792d9cb617c11695bdd5a16f28a96996505d87a47a1d5280045fd8f67b0d5be6.scope: Deactivated successfully.
Nov 22 05:24:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:56 compute-0 ceph-mon[75840]: Added host compute-0
Nov 22 05:24:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:24:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:56 compute-0 interesting_ishizaka[78250]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 05:24:56 compute-0 podman[78277]: 2025-11-22 05:24:56.780548419 +0000 UTC m=+0.042419570 container create 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:56 compute-0 systemd[1]: libpod-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope: Deactivated successfully.
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.78202378 +0000 UTC m=+0.461054179 container died 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:56 compute-0 systemd[1]: Started libpod-conmon-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope.
Nov 22 05:24:56 compute-0 podman[78224]: 2025-11-22 05:24:56.827708427 +0000 UTC m=+0.506738786 container remove 4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853 (image=quay.io/ceph/ceph:v18, name=interesting_ishizaka, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:24:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:56 compute-0 systemd[1]: libpod-conmon-4faa41156b53bfb1fdf9b50120cb4994d3898466422932ae18a1dfb38a9db853.scope: Deactivated successfully.
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:56 compute-0 sudo[78165]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 22 05:24:56 compute-0 podman[78277]: 2025-11-22 05:24:56.760093435 +0000 UTC m=+0.021964606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:56 compute-0 podman[78277]: 2025-11-22 05:24:56.864773718 +0000 UTC m=+0.126644879 container init 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:24:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:56 compute-0 podman[78277]: 2025-11-22 05:24:56.871283357 +0000 UTC m=+0.133154498 container start 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:24:56 compute-0 podman[78277]: 2025-11-22 05:24:56.874021492 +0000 UTC m=+0.135892723 container attach 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f10cb3c335c3a39fd1a5b5caf187f93150b270fdc2df42765f069528cda3807-merged.mount: Deactivated successfully.
Nov 22 05:24:56 compute-0 sudo[78310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:56 compute-0 sudo[78310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:56 compute-0 sudo[78310]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:56 compute-0 sudo[78335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:56 compute-0 sudo[78335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 sudo[78335]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:24:57 compute-0 sudo[78360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:57 compute-0 sudo[78360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 sudo[78360]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 sudo[78385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 05:24:57 compute-0 sudo[78385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:57 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 22 05:24:57 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 22 05:24:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:24:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:57 compute-0 inspiring_greider[78305]: Scheduled mgr update...
Nov 22 05:24:57 compute-0 sudo[78385]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:24:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:57 compute-0 systemd[1]: libpod-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope: Deactivated successfully.
Nov 22 05:24:57 compute-0 podman[78277]: 2025-11-22 05:24:57.418122436 +0000 UTC m=+0.679993577 container died 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc1994b3d782817bf8e21f35259201d3720800776c223f37e2547ea4d14f04cf-merged.mount: Deactivated successfully.
Nov 22 05:24:57 compute-0 podman[78277]: 2025-11-22 05:24:57.4636284 +0000 UTC m=+0.725499551 container remove 5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d (image=quay.io/ceph/ceph:v18, name=inspiring_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:24:57 compute-0 systemd[1]: libpod-conmon-5182546f4d1951052191ae10d31eb660919c9b576193538daf4f525f8b69269d.scope: Deactivated successfully.
Nov 22 05:24:57 compute-0 sudo[78452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:57 compute-0 sudo[78452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 sudo[78452]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 podman[78487]: 2025-11-22 05:24:57.54825088 +0000 UTC m=+0.056412164 container create cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:24:57 compute-0 sudo[78490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:57 compute-0 sudo[78490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 sudo[78490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 systemd[1]: Started libpod-conmon-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope.
Nov 22 05:24:57 compute-0 podman[78487]: 2025-11-22 05:24:57.528818344 +0000 UTC m=+0.036979678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:57 compute-0 sudo[78529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:57 compute-0 sudo[78529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:57 compute-0 sudo[78529]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:57 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:57 compute-0 podman[78487]: 2025-11-22 05:24:57.648247153 +0000 UTC m=+0.156408517 container init cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:24:57 compute-0 podman[78487]: 2025-11-22 05:24:57.661906939 +0000 UTC m=+0.170068263 container start cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:57 compute-0 podman[78487]: 2025-11-22 05:24:57.667017761 +0000 UTC m=+0.175179125 container attach cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:24:57 compute-0 sudo[78557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:24:57 compute-0 sudo[78557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:57 compute-0 ceph-mon[75840]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:57 compute-0 ceph-mon[75840]: Saving service mon spec with placement count:5
Nov 22 05:24:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:58 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:58 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service crash spec with placement *
Nov 22 05:24:58 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 22 05:24:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 05:24:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:58 compute-0 charming_driscoll[78536]: Scheduled crash update...
Nov 22 05:24:58 compute-0 systemd[1]: libpod-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope: Deactivated successfully.
Nov 22 05:24:58 compute-0 podman[78487]: 2025-11-22 05:24:58.25380877 +0000 UTC m=+0.761970064 container died cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:24:58 compute-0 podman[78673]: 2025-11-22 05:24:58.27052765 +0000 UTC m=+0.060471026 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4ad8476758a9dbe27e6dabda4a12393b3da073ea6577c5b67c2976e4f09ad91-merged.mount: Deactivated successfully.
Nov 22 05:24:58 compute-0 podman[78487]: 2025-11-22 05:24:58.316424694 +0000 UTC m=+0.824585988 container remove cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b (image=quay.io/ceph/ceph:v18, name=charming_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:24:58 compute-0 systemd[1]: libpod-conmon-cf444a4ad669b1ccda62b1ad1c49c06a7e8f9771c1299bb954f3dd1835c4111b.scope: Deactivated successfully.
Nov 22 05:24:58 compute-0 podman[78705]: 2025-11-22 05:24:58.379433829 +0000 UTC m=+0.042419260 container create 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:24:58 compute-0 systemd[1]: Started libpod-conmon-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope.
Nov 22 05:24:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:58 compute-0 podman[78705]: 2025-11-22 05:24:58.358717099 +0000 UTC m=+0.021702540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:58 compute-0 podman[78705]: 2025-11-22 05:24:58.461204561 +0000 UTC m=+0.124190002 container init 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:24:58 compute-0 podman[78705]: 2025-11-22 05:24:58.472866512 +0000 UTC m=+0.135851923 container start 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:58 compute-0 podman[78705]: 2025-11-22 05:24:58.475972128 +0000 UTC m=+0.138957539 container attach 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:24:58 compute-0 podman[78673]: 2025-11-22 05:24:58.557354389 +0000 UTC m=+0.347297755 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:24:58 compute-0 sudo[78557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:24:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:58 compute-0 sudo[78758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:58 compute-0 sudo[78758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:58 compute-0 sudo[78758]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:58 compute-0 sudo[78793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:58 compute-0 sudo[78793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:58 compute-0 sudo[78793]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:58 compute-0 sudo[78827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:58 compute-0 sudo[78827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:58 compute-0 sudo[78827]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:59 compute-0 sudo[78852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:24:59 compute-0 sudo[78852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 22 05:24:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3448207332' entity='client.admin' 
Nov 22 05:24:59 compute-0 systemd[1]: libpod-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope: Deactivated successfully.
Nov 22 05:24:59 compute-0 podman[78705]: 2025-11-22 05:24:59.091260751 +0000 UTC m=+0.754246202 container died 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b371bce5a649fb6b89e624f622887c2d855bf22b3e948c11fe9ab717182e5a13-merged.mount: Deactivated successfully.
Nov 22 05:24:59 compute-0 podman[78705]: 2025-11-22 05:24:59.135551531 +0000 UTC m=+0.798536942 container remove 811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a (image=quay.io/ceph/ceph:v18, name=gifted_ardinghelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:24:59 compute-0 systemd[1]: libpod-conmon-811936aeb290f3ae1f084dea39562b97b5055f7e645e868a8f5ad49af281fb0a.scope: Deactivated successfully.
Nov 22 05:24:59 compute-0 podman[78891]: 2025-11-22 05:24:59.231291127 +0000 UTC m=+0.058876092 container create 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:24:59 compute-0 ceph-mon[75840]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:59 compute-0 ceph-mon[75840]: Saving service mgr spec with placement count:2
Nov 22 05:24:59 compute-0 ceph-mon[75840]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:59 compute-0 ceph-mon[75840]: Saving service crash spec with placement *
Nov 22 05:24:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3448207332' entity='client.admin' 
Nov 22 05:24:59 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78916 (sysctl)
Nov 22 05:24:59 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 22 05:24:59 compute-0 systemd[1]: Started libpod-conmon-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope.
Nov 22 05:24:59 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 22 05:24:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:24:59 compute-0 podman[78891]: 2025-11-22 05:24:59.204414298 +0000 UTC m=+0.031999353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:24:59 compute-0 podman[78891]: 2025-11-22 05:24:59.313398019 +0000 UTC m=+0.140983044 container init 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:24:59 compute-0 podman[78891]: 2025-11-22 05:24:59.324452883 +0000 UTC m=+0.152037838 container start 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:24:59 compute-0 podman[78891]: 2025-11-22 05:24:59.327874937 +0000 UTC m=+0.155459992 container attach 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:24:59 compute-0 sudo[78852]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:59 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:24:59 compute-0 sudo[78964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:59 compute-0 sudo[78964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:59 compute-0 sudo[78964]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:59 compute-0 sudo[78989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:24:59 compute-0 sudo[78989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:59 compute-0 sudo[78989]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:59 compute-0 sudo[79014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:24:59 compute-0 sudo[79014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:59 compute-0 sudo[79014]: pam_unix(sudo:session): session closed for user root
Nov 22 05:24:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:24:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 22 05:24:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:24:59 compute-0 systemd[1]: libpod-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope: Deactivated successfully.
Nov 22 05:24:59 compute-0 sudo[79040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 05:24:59 compute-0 sudo[79040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:24:59 compute-0 podman[79060]: 2025-11-22 05:24:59.950187174 +0000 UTC m=+0.032639749 container died 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-707ca0aadb141ed5558c32c4b278aafed585df1b479b341f22d335c19fe49209-merged.mount: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79060]: 2025-11-22 05:25:00.011198835 +0000 UTC m=+0.093651430 container remove 6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082 (image=quay.io/ceph/ceph:v18, name=romantic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:25:00 compute-0 systemd[1]: libpod-conmon-6a0c767a63128a60bd529ce9b0c14768ecc9987172d366ce20cc67916d87f082.scope: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.087149616 +0000 UTC m=+0.045600436 container create d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:25:00 compute-0 sudo[79040]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:00 compute-0 systemd[1]: Started libpod-conmon-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope.
Nov 22 05:25:00 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.163606832 +0000 UTC m=+0.122057712 container init d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.068936104 +0000 UTC m=+0.027386944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.171295873 +0000 UTC m=+0.129746713 container start d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.175932211 +0000 UTC m=+0.134383081 container attach d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:00 compute-0 sudo[79118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:00 compute-0 sudo[79118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:00 compute-0 sudo[79118]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:00 compute-0 sudo[79145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:00 compute-0 sudo[79145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:00 compute-0 sudo[79145]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:00 compute-0 sudo[79170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:00 compute-0 sudo[79170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:00 compute-0 sudo[79170]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:00 compute-0 sudo[79195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- inventory --format=json-pretty --filter-for-batch
Nov 22 05:25:00 compute-0 sudo[79195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.713924116 +0000 UTC m=+0.048893227 container create 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:25:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:25:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:25:00 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:00 compute-0 ceph-mgr[76134]: [cephadm INFO root] Added label _admin to host compute-0
Nov 22 05:25:00 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 22 05:25:00 compute-0 jolly_lovelace[79115]: Added label _admin to host compute-0
Nov 22 05:25:00 compute-0 systemd[1]: Started libpod-conmon-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope.
Nov 22 05:25:00 compute-0 systemd[1]: libpod-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.758305709 +0000 UTC m=+0.716756529 container died d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-041b76b264ccf4832bcedf51e13f61252a3fa95956fea3a5ef0db6ba55dd275a-merged.mount: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.693198305 +0000 UTC m=+0.028167436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.789867027 +0000 UTC m=+0.124836178 container init 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.797139228 +0000 UTC m=+0.132108349 container start 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:25:00 compute-0 infallible_jang[79300]: 167 167
Nov 22 05:25:00 compute-0 systemd[1]: libpod-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.807306008 +0000 UTC m=+0.142275159 container attach 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:00 compute-0 podman[79081]: 2025-11-22 05:25:00.811992577 +0000 UTC m=+0.770443407 container remove d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1 (image=quay.io/ceph/ceph:v18, name=jolly_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.815971567 +0000 UTC m=+0.150940688 container died 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-02fadf327700d05a91fe3eba7f82c79fc9be839fd7e722b8d9501a0d0904096b-merged.mount: Deactivated successfully.
Nov 22 05:25:00 compute-0 systemd[1]: libpod-conmon-d5d5c1a8d9e228a6d8bfc538b704ece2a41f14af6fc4d23b9f747a00cdcc4bc1.scope: Deactivated successfully.
Nov 22 05:25:00 compute-0 podman[79281]: 2025-11-22 05:25:00.854544768 +0000 UTC m=+0.189513889 container remove 265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jang, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:00 compute-0 systemd[1]: libpod-conmon-265b8d92191ffd064066fe618dd3cccbc39b3f6a3011d90ca07f5321c33ee177.scope: Deactivated successfully.
Nov 22 05:25:00 compute-0 ceph-mon[75840]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:25:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:00 compute-0 podman[79323]: 2025-11-22 05:25:00.888901325 +0000 UTC m=+0.053558886 container create a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:25:00 compute-0 systemd[1]: Started libpod-conmon-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope.
Nov 22 05:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:00 compute-0 podman[79323]: 2025-11-22 05:25:00.863407073 +0000 UTC m=+0.028064644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:00 compute-0 podman[79323]: 2025-11-22 05:25:00.974082361 +0000 UTC m=+0.138739902 container init a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:00 compute-0 podman[79323]: 2025-11-22 05:25:00.985087284 +0000 UTC m=+0.149744855 container start a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:00 compute-0 podman[79323]: 2025-11-22 05:25:00.990577765 +0000 UTC m=+0.155235316 container attach a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 22 05:25:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 22 05:25:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1330127533' entity='client.admin' 
Nov 22 05:25:01 compute-0 systemd[1]: libpod-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope: Deactivated successfully.
Nov 22 05:25:01 compute-0 podman[79323]: 2025-11-22 05:25:01.553358482 +0000 UTC m=+0.718016043 container died a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-026d2363f8c5b9e56460abce794b84d0e4d75390ca7c62fd644ea71b2f1267ef-merged.mount: Deactivated successfully.
Nov 22 05:25:01 compute-0 podman[79323]: 2025-11-22 05:25:01.604180473 +0000 UTC m=+0.768837994 container remove a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d (image=quay.io/ceph/ceph:v18, name=frosty_bartik, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:01 compute-0 systemd[1]: libpod-conmon-a5a04b8c7dc4ae914c252dc9c0c9da59915894d5b3934b4930b4e5aeededfa6d.scope: Deactivated successfully.
Nov 22 05:25:01 compute-0 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 05:25:01 compute-0 podman[79382]: 2025-11-22 05:25:01.688758591 +0000 UTC m=+0.056453065 container create 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:01 compute-0 systemd[1]: Started libpod-conmon-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope.
Nov 22 05:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:01 compute-0 podman[79382]: 2025-11-22 05:25:01.66729169 +0000 UTC m=+0.034986214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:01 compute-0 podman[79382]: 2025-11-22 05:25:01.78093897 +0000 UTC m=+0.148633454 container init 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:25:01 compute-0 podman[79382]: 2025-11-22 05:25:01.787066008 +0000 UTC m=+0.154760482 container start 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:25:01 compute-0 podman[79382]: 2025-11-22 05:25:01.790188015 +0000 UTC m=+0.157882539 container attach 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:25:01 compute-0 ceph-mon[75840]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:25:01 compute-0 ceph-mon[75840]: Added label _admin to host compute-0
Nov 22 05:25:01 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1330127533' entity='client.admin' 
Nov 22 05:25:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 22 05:25:02 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1299912635' entity='client.admin' 
Nov 22 05:25:02 compute-0 great_spence[79399]: set mgr/dashboard/cluster/status
Nov 22 05:25:02 compute-0 systemd[1]: libpod-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope: Deactivated successfully.
Nov 22 05:25:02 compute-0 podman[79382]: 2025-11-22 05:25:02.444147694 +0000 UTC m=+0.811842168 container died 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-a23941f723b93759f831ddb787c7fdb81d8b298fc6699160df295e5f2074b081-merged.mount: Deactivated successfully.
Nov 22 05:25:02 compute-0 podman[79382]: 2025-11-22 05:25:02.493668687 +0000 UTC m=+0.861363201 container remove 25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92 (image=quay.io/ceph/ceph:v18, name=great_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:25:02 compute-0 systemd[1]: libpod-conmon-25887aa7fabe086a34d7fb2322c41ce4f1d84ac735399483ebd1a47145bbdb92.scope: Deactivated successfully.
Nov 22 05:25:02 compute-0 sudo[74827]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:02 compute-0 podman[79444]: 2025-11-22 05:25:02.717365878 +0000 UTC m=+0.055786778 container create 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:02 compute-0 systemd[1]: Started libpod-conmon-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope.
Nov 22 05:25:02 compute-0 podman[79444]: 2025-11-22 05:25:02.691350601 +0000 UTC m=+0.029771521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:02 compute-0 podman[79444]: 2025-11-22 05:25:02.829362402 +0000 UTC m=+0.167783282 container init 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:25:02 compute-0 podman[79444]: 2025-11-22 05:25:02.845689211 +0000 UTC m=+0.184110111 container start 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:25:02 compute-0 podman[79444]: 2025-11-22 05:25:02.85037155 +0000 UTC m=+0.188792410 container attach 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:25:03 compute-0 sudo[79488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxtbhmcwsilreaibidhdbejvdpttjtpa ; /usr/bin/python3'
Nov 22 05:25:03 compute-0 sudo[79488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:03 compute-0 python3[79490]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.292444194 +0000 UTC m=+0.071092709 container create e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:25:03 compute-0 systemd[1]: Started libpod-conmon-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope.
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.257965774 +0000 UTC m=+0.036614329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.389796825 +0000 UTC m=+0.168445410 container init e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.399876482 +0000 UTC m=+0.178524997 container start e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.404677925 +0000 UTC m=+0.183326440 container attach e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:25:03 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1299912635' entity='client.admin' 
Nov 22 05:25:03 compute-0 ceph-mgr[76134]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 22 05:25:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:03 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 05:25:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 22 05:25:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2682568180' entity='client.admin' 
Nov 22 05:25:03 compute-0 systemd[1]: libpod-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope: Deactivated successfully.
Nov 22 05:25:03 compute-0 podman[79491]: 2025-11-22 05:25:03.985110058 +0000 UTC m=+0.763758563 container died e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 05:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a88b4029d05eda3c48c7f19913dd26ae9870c37196cc3eecd61c99bc58a6de79-merged.mount: Deactivated successfully.
Nov 22 05:25:04 compute-0 podman[79491]: 2025-11-22 05:25:04.037357008 +0000 UTC m=+0.816005483 container remove e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5 (image=quay.io/ceph/ceph:v18, name=amazing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:25:04 compute-0 systemd[1]: libpod-conmon-e1eb4dbbb0b72ea155973eff8c05d5fed15cb316e2b694dce1ae8b24fc9f5af5.scope: Deactivated successfully.
Nov 22 05:25:04 compute-0 sudo[79488]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 determined_franklin[79460]: [
Nov 22 05:25:04 compute-0 determined_franklin[79460]:     {
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "available": false,
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "ceph_device": false,
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "lsm_data": {},
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "lvs": [],
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "path": "/dev/sr0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "rejected_reasons": [
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "Insufficient space (<5GB)",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "Has a FileSystem"
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         ],
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         "sys_api": {
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "actuators": null,
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "device_nodes": "sr0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "devname": "sr0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "human_readable_size": "482.00 KB",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "id_bus": "ata",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "model": "QEMU DVD-ROM",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "nr_requests": "2",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "parent": "/dev/sr0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "partitions": {},
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "path": "/dev/sr0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "removable": "1",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "rev": "2.5+",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "ro": "0",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "rotational": "1",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "sas_address": "",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "sas_device_handle": "",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "scheduler_mode": "mq-deadline",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "sectors": 0,
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "sectorsize": "2048",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "size": 493568.0,
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "support_discard": "2048",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "type": "disk",
Nov 22 05:25:04 compute-0 determined_franklin[79460]:             "vendor": "QEMU"
Nov 22 05:25:04 compute-0 determined_franklin[79460]:         }
Nov 22 05:25:04 compute-0 determined_franklin[79460]:     }
Nov 22 05:25:04 compute-0 determined_franklin[79460]: ]
Nov 22 05:25:04 compute-0 systemd[1]: libpod-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Deactivated successfully.
Nov 22 05:25:04 compute-0 systemd[1]: libpod-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Consumed 1.555s CPU time.
Nov 22 05:25:04 compute-0 podman[79444]: 2025-11-22 05:25:04.380226379 +0000 UTC m=+1.718647269 container died 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ca357f85484b4c95257faf852b5626d6a96cd04a732bd62aae620154941b280-merged.mount: Deactivated successfully.
Nov 22 05:25:04 compute-0 ceph-mon[75840]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:04 compute-0 ceph-mon[75840]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 05:25:04 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2682568180' entity='client.admin' 
Nov 22 05:25:04 compute-0 podman[79444]: 2025-11-22 05:25:04.448881749 +0000 UTC m=+1.787302629 container remove 4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:04 compute-0 systemd[1]: libpod-conmon-4b628b8a15ed6bcfa80f471cd273363130d9845ec650981ac768be1d80dec06c.scope: Deactivated successfully.
Nov 22 05:25:04 compute-0 sudo[79195]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:25:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:04 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 22 05:25:04 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 22 05:25:04 compute-0 sudo[81493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:04 compute-0 sudo[81493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:04 compute-0 sudo[81493]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 sudo[81541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 22 05:25:04 compute-0 sudo[81541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:04 compute-0 sudo[81541]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 sudo[81566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:04 compute-0 sudo[81566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:04 compute-0 sudo[81566]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 sudo[81591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph
Nov 22 05:25:04 compute-0 sudo[81591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:04 compute-0 sudo[81591]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:04 compute-0 sudo[81640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:04 compute-0 sudo[81640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:04 compute-0 sudo[81640]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzlwunviuhjhtwgcptxtuedrwnwirsem ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789104.4488592-36405-60921918125236/async_wrapper.py j421611236538 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789104.4488592-36405-60921918125236/AnsiballZ_command.py _'
Nov 22 05:25:05 compute-0 sudo[81734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:05 compute-0 sudo[81688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.conf.new
Nov 22 05:25:05 compute-0 sudo[81688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81688]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[81741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81741]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 ansible-async_wrapper.py[81738]: Invoked with j421611236538 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789104.4488592-36405-60921918125236/AnsiballZ_command.py _
Nov 22 05:25:05 compute-0 ansible-async_wrapper.py[81791]: Starting module and watcher
Nov 22 05:25:05 compute-0 ansible-async_wrapper.py[81791]: Start watching 81792 (30)
Nov 22 05:25:05 compute-0 ansible-async_wrapper.py[81792]: Start module (81792)
Nov 22 05:25:05 compute-0 ansible-async_wrapper.py[81738]: Return async_wrapper task started.
Nov 22 05:25:05 compute-0 sudo[81766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:05 compute-0 sudo[81766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81766]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81734]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[81796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81796]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 python3[81794]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:05 compute-0 sudo[81821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.conf.new
Nov 22 05:25:05 compute-0 sudo[81821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81821]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 podman[81844]: 2025-11-22 05:25:05.395886489 +0000 UTC m=+0.049174795 container create 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:05 compute-0 systemd[1]: Started libpod-conmon-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope.
Nov 22 05:25:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:05 compute-0 podman[81844]: 2025-11-22 05:25:05.375830917 +0000 UTC m=+0.029119213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:05 compute-0 podman[81844]: 2025-11-22 05:25:05.48163288 +0000 UTC m=+0.134921136 container init 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:05 compute-0 podman[81844]: 2025-11-22 05:25:05.492288814 +0000 UTC m=+0.145577070 container start 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:05 compute-0 podman[81844]: 2025-11-22 05:25:05.495875592 +0000 UTC m=+0.149163848 container attach 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:25:05 compute-0 sudo[81886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[81886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81886]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:05 compute-0 ceph-mon[75840]: Updating compute-0:/etc/ceph/ceph.conf
Nov 22 05:25:05 compute-0 sudo[81912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.conf.new
Nov 22 05:25:05 compute-0 sudo[81912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81912]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[81937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81937]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:05 compute-0 sudo[81962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.conf.new
Nov 22 05:25:05 compute-0 sudo[81962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81962]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[81987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[81987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[81987]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[82012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 22 05:25:05 compute-0 sudo[82012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[82012]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 05:25:05 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 05:25:05 compute-0 sudo[82056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:05 compute-0 sudo[82056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[82056]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:05 compute-0 sudo[82081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config
Nov 22 05:25:05 compute-0 sudo[82081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:05 compute-0 sudo[82081]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:25:06 compute-0 quirky_einstein[81883]: 
Nov 22 05:25:06 compute-0 quirky_einstein[81883]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 05:25:06 compute-0 sudo[82106]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 systemd[1]: libpod-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope: Deactivated successfully.
Nov 22 05:25:06 compute-0 podman[81844]: 2025-11-22 05:25:06.039786971 +0000 UTC m=+0.693075227 container died 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d42708d7a283ed77d7f1ce11a43e8a1c07b35a3d55cf61fcd58d21e3c5f48ed-merged.mount: Deactivated successfully.
Nov 22 05:25:06 compute-0 sudo[82133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config
Nov 22 05:25:06 compute-0 sudo[82133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 podman[81844]: 2025-11-22 05:25:06.083148125 +0000 UTC m=+0.736436381 container remove 97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884 (image=quay.io/ceph/ceph:v18, name=quirky_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:06 compute-0 sudo[82133]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 systemd[1]: libpod-conmon-97bad42ef4e03e949b01b26d5df923b2b702c394802fc503cb25ab8c842e3884.scope: Deactivated successfully.
Nov 22 05:25:06 compute-0 ansible-async_wrapper.py[81792]: Module complete (81792)
Nov 22 05:25:06 compute-0 sudo[82168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82168]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf.new
Nov 22 05:25:06 compute-0 sudo[82193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82193]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82218]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:06 compute-0 sudo[82266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82266]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82291]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf.new
Nov 22 05:25:06 compute-0 sudo[82316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82316]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqdqmuvbmwismiekjgswbrxdewoleakq ; /usr/bin/python3'
Nov 22 05:25:06 compute-0 sudo[82368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:06 compute-0 ceph-mon[75840]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:06 compute-0 ceph-mon[75840]: Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 05:25:06 compute-0 ceph-mon[75840]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:25:06 compute-0 sudo[82390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82390]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf.new
Nov 22 05:25:06 compute-0 sudo[82415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82415]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 python3[82389]: ansible-ansible.legacy.async_status Invoked with jid=j421611236538.81738 mode=status _async_dir=/root/.ansible_async
Nov 22 05:25:06 compute-0 sudo[82368]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82440]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf.new
Nov 22 05:25:06 compute-0 sudo[82487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82487]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrusajxhjcheyuxfasujqnrysyrqaak ; /usr/bin/python3'
Nov 22 05:25:06 compute-0 sudo[82540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:06 compute-0 sudo[82535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82535]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf.new /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.conf
Nov 22 05:25:06 compute-0 sudo[82564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82564]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 05:25:06 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 05:25:06 compute-0 python3[82561]: ansible-ansible.legacy.async_status Invoked with jid=j421611236538.81738 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 05:25:06 compute-0 sudo[82540]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:06 compute-0 sudo[82589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:06 compute-0 sudo[82589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:06 compute-0 sudo[82589]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:07 compute-0 sudo[82614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 22 05:25:07 compute-0 sudo[82614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82614]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82639]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph
Nov 22 05:25:07 compute-0 sudo[82664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82664]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fillhtijgujzvlfasrkxfmbjaxpvvdhd ; /usr/bin/python3'
Nov 22 05:25:07 compute-0 sudo[82735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:07 compute-0 sudo[82690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82690]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.client.admin.keyring.new
Nov 22 05:25:07 compute-0 sudo[82740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82740]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82765]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 python3[82738]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 05:25:07 compute-0 sudo[82735]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:07 compute-0 sudo[82790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82790]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82817]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 ceph-mon[75840]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 05:25:07 compute-0 sudo[82842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.client.admin.keyring.new
Nov 22 05:25:07 compute-0 sudo[82842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82842]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:07 compute-0 sudo[82890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azvgedfpunjeonqezftiavnbszpueolk ; /usr/bin/python3'
Nov 22 05:25:07 compute-0 sudo[82890]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:07 compute-0 sudo[82941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.client.admin.keyring.new
Nov 22 05:25:07 compute-0 sudo[82941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82941]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 sudo[82966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:07 compute-0 sudo[82966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82966]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 python3[82940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:07 compute-0 sudo[82991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.client.admin.keyring.new
Nov 22 05:25:07 compute-0 sudo[82991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:07 compute-0 sudo[82991]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:07 compute-0 podman[82996]: 2025-11-22 05:25:07.968819792 +0000 UTC m=+0.054139121 container create 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:08 compute-0 systemd[1]: Started libpod-conmon-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope.
Nov 22 05:25:08 compute-0 sudo[83029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83029]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:08.026535292 +0000 UTC m=+0.111854671 container init 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:08.03228685 +0000 UTC m=+0.117606179 container start 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:08.035068907 +0000 UTC m=+0.120388256 container attach 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:07.943629568 +0000 UTC m=+0.028948917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:08 compute-0 sudo[83059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 22 05:25:08 compute-0 sudo[83059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83059]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 05:25:08 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 05:25:08 compute-0 sudo[83085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83085]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config
Nov 22 05:25:08 compute-0 sudo[83110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83110]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83135]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config
Nov 22 05:25:08 compute-0 sudo[83160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83160]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83185]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring.new
Nov 22 05:25:08 compute-0 sudo[83229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83229]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83254]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:08 compute-0 sudo[83279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83279]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 ceph-mon[75840]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:08 compute-0 ceph-mon[75840]: Updating compute-0:/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 05:25:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:25:08 compute-0 hopeful_aryabhata[83054]: 
Nov 22 05:25:08 compute-0 hopeful_aryabhata[83054]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 05:25:08 compute-0 systemd[1]: libpod-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope: Deactivated successfully.
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:08.585901525 +0000 UTC m=+0.671220874 container died 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6ba2793fd3fc8ed5f56c73c868c4f48b9dbcf5cc2cb662219fbcbed9165fede-merged.mount: Deactivated successfully.
Nov 22 05:25:08 compute-0 sudo[83304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83304]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 podman[82996]: 2025-11-22 05:25:08.630066452 +0000 UTC m=+0.715385781 container remove 1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679 (image=quay.io/ceph/ceph:v18, name=hopeful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:25:08 compute-0 systemd[1]: libpod-conmon-1c0e0545e70b9bf73ae34a5bc4adc75d14a27d1754e3baefe70f6533292c4679.scope: Deactivated successfully.
Nov 22 05:25:08 compute-0 sudo[82936]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring.new
Nov 22 05:25:08 compute-0 sudo[83338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83338]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83392]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring.new
Nov 22 05:25:08 compute-0 sudo[83417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83417]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:08 compute-0 sudo[83442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83442]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:08 compute-0 sudo[83498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azmrvtfdcmfoerzzczcluynnzskgrkom ; /usr/bin/python3'
Nov 22 05:25:08 compute-0 sudo[83498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:08 compute-0 sudo[83480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring.new
Nov 22 05:25:08 compute-0 sudo[83480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:08 compute-0 sudo[83480]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 sudo[83518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:09 compute-0 sudo[83518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 sudo[83518]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 python3[83515]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:09 compute-0 sudo[83543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-13fdadc6-d566-5465-9ac8-a148ef130da1/var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring.new /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/config/ceph.client.admin.keyring
Nov 22 05:25:09 compute-0 sudo[83543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.107372496 +0000 UTC m=+0.036514126 container create 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:25:09 compute-0 sudo[83543]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:09 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1))
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 05:25:09 compute-0 systemd[1]: Started libpod-conmon-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope.
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:09 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 22 05:25:09 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 22 05:25:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.170256968 +0000 UTC m=+0.099398628 container init 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.176428888 +0000 UTC m=+0.105570528 container start 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.180095349 +0000 UTC m=+0.109236989 container attach 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.091949312 +0000 UTC m=+0.021090982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:09 compute-0 sudo[83587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:09 compute-0 sudo[83587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 sudo[83587]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 sudo[83613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:09 compute-0 sudo[83613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 sudo[83613]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 sudo[83638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:09 compute-0 sudo[83638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 sudo[83638]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 sudo[83663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:09 compute-0 sudo[83663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 22 05:25:09 compute-0 podman[83749]: 2025-11-22 05:25:09.73465488 +0000 UTC m=+0.109334781 container create a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1937902109' entity='client.admin' 
Nov 22 05:25:09 compute-0 podman[83749]: 2025-11-22 05:25:09.652231771 +0000 UTC m=+0.026911692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:09 compute-0 systemd[1]: libpod-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope: Deactivated successfully.
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.76516458 +0000 UTC m=+0.694306220 container died 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:25:09 compute-0 systemd[1]: Started libpod-conmon-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope.
Nov 22 05:25:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7afe64eca397dd92e1e1bad0150c943e550bcd6e352595c65bd0fcc631710162-merged.mount: Deactivated successfully.
Nov 22 05:25:09 compute-0 podman[83749]: 2025-11-22 05:25:09.820927456 +0000 UTC m=+0.195607587 container init a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:09 compute-0 podman[83749]: 2025-11-22 05:25:09.828632938 +0000 UTC m=+0.203312819 container start a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:09 compute-0 podman[83749]: 2025-11-22 05:25:09.832071752 +0000 UTC m=+0.206751723 container attach a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:25:09 compute-0 eloquent_pascal[83773]: 167 167
Nov 22 05:25:09 compute-0 systemd[1]: libpod-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope: Deactivated successfully.
Nov 22 05:25:09 compute-0 podman[83549]: 2025-11-22 05:25:09.843089966 +0000 UTC m=+0.772231586 container remove 5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122 (image=quay.io/ceph/ceph:v18, name=mystifying_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:25:09 compute-0 systemd[1]: libpod-conmon-5b427a3a898f8db4730cbc917f3bf64f151f11385ddbd33c9382c7a385e63122.scope: Deactivated successfully.
Nov 22 05:25:09 compute-0 sudo[83498]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:09 compute-0 podman[83785]: 2025-11-22 05:25:09.88752708 +0000 UTC m=+0.033839463 container died a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee3c60df18fed0b31cd3ad6f6ab05830882276f8ab7cfff6a16b6da026e7c3ef-merged.mount: Deactivated successfully.
Nov 22 05:25:09 compute-0 podman[83785]: 2025-11-22 05:25:09.930503284 +0000 UTC m=+0.076815617 container remove a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:25:09 compute-0 systemd[1]: libpod-conmon-a9f2d20e9e383f5bcfd43c828b4ee4a090cd71dcb36de0efaadd510808e79061.scope: Deactivated successfully.
Nov 22 05:25:09 compute-0 systemd[1]: Reloading.
Nov 22 05:25:10 compute-0 systemd-sysv-generator[83858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:10 compute-0 systemd-rc-local-generator[83855]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:10 compute-0 ceph-mon[75840]: Deploying daemon crash.compute-0 on compute-0
Nov 22 05:25:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1937902109' entity='client.admin' 
Nov 22 05:25:10 compute-0 ansible-async_wrapper.py[81791]: Done in kid B.
Nov 22 05:25:10 compute-0 sudo[83827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timbblwfynxrjxgshdpfodrvlskryain ; /usr/bin/python3'
Nov 22 05:25:10 compute-0 sudo[83827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:10 compute-0 systemd[1]: Reloading.
Nov 22 05:25:10 compute-0 systemd-sysv-generator[83901]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:10 compute-0 systemd-rc-local-generator[83897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:10 compute-0 python3[83865]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:10 compute-0 podman[83904]: 2025-11-22 05:25:10.487800561 +0000 UTC m=+0.061733872 container create de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:25:10 compute-0 podman[83904]: 2025-11-22 05:25:10.45979858 +0000 UTC m=+0.033731971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:10 compute-0 systemd[1]: Started libpod-conmon-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope.
Nov 22 05:25:10 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 podman[83904]: 2025-11-22 05:25:10.60544187 +0000 UTC m=+0.179375221 container init de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:10 compute-0 podman[83904]: 2025-11-22 05:25:10.614084588 +0000 UTC m=+0.188017889 container start de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:25:10 compute-0 podman[83904]: 2025-11-22 05:25:10.617850381 +0000 UTC m=+0.191783732 container attach de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:25:10 compute-0 podman[83975]: 2025-11-22 05:25:10.846795976 +0000 UTC m=+0.048034143 container create c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80ac4f4a962bf82221a982dbb55362bc3fcaf41bb6c932f5b562b7708d58d3bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:10 compute-0 podman[83975]: 2025-11-22 05:25:10.913789491 +0000 UTC m=+0.115027678 container init c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:25:10 compute-0 podman[83975]: 2025-11-22 05:25:10.823552656 +0000 UTC m=+0.024790843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:10 compute-0 podman[83975]: 2025-11-22 05:25:10.924824525 +0000 UTC m=+0.126062692 container start c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:10 compute-0 bash[83975]: c4eec30b75a26e9ab6e19b62c5cf507b2f3ac178060313ddb2331256cf416708
Nov 22 05:25:10 compute-0 systemd[1]: Started Ceph crash.compute-0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:10 compute-0 sudo[83663]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1))
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event eab0bc42-8735-47c8-81ce-32474a0e4087 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d7b9a17f-2291-470e-9487-19ad1ed48200 does not exist
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2))
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 05:25:11 compute-0 sudo[84015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:11 compute-0 sudo[84015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:11 compute-0 sudo[84015]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:11 compute-0 ceph-mon[75840]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.okewxb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:25:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 22 05:25:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 22 05:25:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1809843038' entity='client.admin' 
Nov 22 05:25:11 compute-0 sudo[84040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:11 compute-0 sudo[84040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:11 compute-0 sudo[84040]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:11 compute-0 systemd[1]: libpod-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope: Deactivated successfully.
Nov 22 05:25:11 compute-0 podman[83904]: 2025-11-22 05:25:11.233121215 +0000 UTC m=+0.807054516 container died de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0c0d0d28365fe3bf6c5194d405f47c7a7e3d0e41d350230f35095cfb431fb5-merged.mount: Deactivated successfully.
Nov 22 05:25:11 compute-0 podman[83904]: 2025-11-22 05:25:11.285416555 +0000 UTC m=+0.859349856 container remove de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a (image=quay.io/ceph/ceph:v18, name=epic_solomon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:11 compute-0 systemd[1]: libpod-conmon-de385cae2d69576f42a4320c8e1c4f70cf5a44f5c38c38002d229bde1b2cbc7a.scope: Deactivated successfully.
Nov 22 05:25:11 compute-0 sudo[84070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:11 compute-0 sudo[83827]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:11 compute-0 sudo[84070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:11 compute-0 sudo[84070]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:11 compute-0 sudo[84104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:11 compute-0 sudo[84104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.408+0000 7f4e9a568640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.408+0000 7f4e9a568640 -1 AuthRegistry(0x7f4e94066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.409+0000 7f4e9a568640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.409+0000 7f4e9a568640 -1 AuthRegistry(0x7f4e9a567000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.410+0000 7f4e93fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: 2025-11-22T05:25:11.410+0000 7f4e9a568640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 22 05:25:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-crash-compute-0[83991]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 22 05:25:11 compute-0 sudo[84162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlcsjblcxxovmtlmmccfbxnkswtlvfll ; /usr/bin/python3'
Nov 22 05:25:11 compute-0 sudo[84162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:11 compute-0 python3[84166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:11 compute-0 podman[84189]: 2025-11-22 05:25:11.764674863 +0000 UTC m=+0.080399105 container create 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:25:11 compute-0 systemd[1]: Started libpod-conmon-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope.
Nov 22 05:25:11 compute-0 podman[84189]: 2025-11-22 05:25:11.727695604 +0000 UTC m=+0.043419906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.853064917 +0000 UTC m=+0.065245668 container create e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:11 compute-0 podman[84189]: 2025-11-22 05:25:11.861756267 +0000 UTC m=+0.177480489 container init 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:11 compute-0 podman[84189]: 2025-11-22 05:25:11.871202137 +0000 UTC m=+0.186926349 container start 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:11 compute-0 podman[84189]: 2025-11-22 05:25:11.876380719 +0000 UTC m=+0.192104931 container attach 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:25:11 compute-0 systemd[1]: Started libpod-conmon-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope.
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.82519927 +0000 UTC m=+0.037380021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.960040893 +0000 UTC m=+0.172221634 container init e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.967156889 +0000 UTC m=+0.179337630 container start e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.971242821 +0000 UTC m=+0.183423562 container attach e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:25:11 compute-0 determined_wilson[84240]: 167 167
Nov 22 05:25:11 compute-0 systemd[1]: libpod-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope: Deactivated successfully.
Nov 22 05:25:11 compute-0 conmon[84240]: conmon e975d9bf3dd3943f847e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope/container/memory.events
Nov 22 05:25:11 compute-0 podman[84218]: 2025-11-22 05:25:11.974272625 +0000 UTC m=+0.186453366 container died e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa0eead779fc8403c2542ab46a873511fad7f8774a3e4ff45c07d4c023111de7-merged.mount: Deactivated successfully.
Nov 22 05:25:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:12 compute-0 podman[84218]: 2025-11-22 05:25:12.028399746 +0000 UTC m=+0.240580467 container remove e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wilson, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:12 compute-0 systemd[1]: libpod-conmon-e975d9bf3dd3943f847ed396f4b1da515ad0aaf5af5d54f53354e6fd8efb7efa.scope: Deactivated successfully.
Nov 22 05:25:12 compute-0 systemd[1]: Reloading.
Nov 22 05:25:12 compute-0 systemd-rc-local-generator[84285]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:12 compute-0 systemd-sysv-generator[84291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:12 compute-0 ceph-mon[75840]: Deploying daemon mgr.compute-0.okewxb on compute-0
Nov 22 05:25:12 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1809843038' entity='client.admin' 
Nov 22 05:25:12 compute-0 systemd[1]: Reloading.
Nov 22 05:25:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 22 05:25:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 05:25:12 compute-0 systemd-rc-local-generator[84339]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:12 compute-0 systemd-sysv-generator[84348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:12 compute-0 systemd[1]: Starting Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:12 compute-0 podman[84402]: 2025-11-22 05:25:12.988317039 +0000 UTC m=+0.065991808 container create 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6/merged/var/lib/ceph/mgr/ceph-compute-0.okewxb supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:13 compute-0 podman[84402]: 2025-11-22 05:25:12.957355667 +0000 UTC m=+0.035030506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:13 compute-0 podman[84402]: 2025-11-22 05:25:13.059371726 +0000 UTC m=+0.137046475 container init 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:25:13 compute-0 podman[84402]: 2025-11-22 05:25:13.070169174 +0000 UTC m=+0.147843943 container start 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:25:13 compute-0 bash[84402]: 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6
Nov 22 05:25:13 compute-0 systemd[1]: Started Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:13 compute-0 sudo[84104]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: pidfile_write: ignore empty --pid-file
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2))
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 86683be6-d313-416c-8d8b-8d87f0b74c48 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:13 compute-0 ceph-mon[75840]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 05:25:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 22 05:25:13 compute-0 sudo[84446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:13 compute-0 flamboyant_euler[84231]: set require_min_compat_client to mimic
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 22 05:25:13 compute-0 sudo[84446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 sudo[84446]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'alerts'
Nov 22 05:25:13 compute-0 systemd[1]: libpod-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope: Deactivated successfully.
Nov 22 05:25:13 compute-0 podman[84189]: 2025-11-22 05:25:13.262659314 +0000 UTC m=+1.578383516 container died 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-965051e6b0ba992364b753327119433f9c7621adceb844391dd30768c2fdca68-merged.mount: Deactivated successfully.
Nov 22 05:25:13 compute-0 podman[84189]: 2025-11-22 05:25:13.308760154 +0000 UTC m=+1.624484356 container remove 1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6 (image=quay.io/ceph/ceph:v18, name=flamboyant_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:13 compute-0 sudo[84472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:25:13 compute-0 sudo[84472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 sudo[84472]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 sudo[84162]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 systemd[1]: libpod-conmon-1d51fc61e70f578e4f1462f5a0980fca45e047d1fda7fd9be611b11281a21fb6.scope: Deactivated successfully.
Nov 22 05:25:13 compute-0 sudo[84507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:13 compute-0 sudo[84507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 sudo[84507]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 sudo[84532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:13 compute-0 sudo[84532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 sudo[84532]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 sudo[84557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:13 compute-0 sudo[84557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 sudo[84557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'balancer'
Nov 22 05:25:13 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:13.560+0000 7f6cc6f38140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 05:25:13 compute-0 sudo[84582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:25:13 compute-0 sudo[84582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 2 completed events
Nov 22 05:25:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:25:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:13 compute-0 sudo[84630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhnzlhqfxnpiddcjpantqughrxdxamn ; /usr/bin/python3'
Nov 22 05:25:13 compute-0 sudo[84630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:25:13 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'cephadm'
Nov 22 05:25:13 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:13.823+0000 7f6cc6f38140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 05:25:13 compute-0 python3[84633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:14 compute-0 podman[84674]: 2025-11-22 05:25:14.056375372 +0000 UTC m=+0.095304305 container create abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:14 compute-0 podman[84674]: 2025-11-22 05:25:14.005656635 +0000 UTC m=+0.044585568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:14 compute-0 systemd[1]: Started libpod-conmon-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope.
Nov 22 05:25:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:14 compute-0 podman[84674]: 2025-11-22 05:25:14.175627756 +0000 UTC m=+0.214556679 container init abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:25:14 compute-0 podman[84674]: 2025-11-22 05:25:14.183616166 +0000 UTC m=+0.222545089 container start abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:25:14 compute-0 podman[84674]: 2025-11-22 05:25:14.20951638 +0000 UTC m=+0.248445303 container attach abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:14 compute-0 podman[84716]: 2025-11-22 05:25:14.236726069 +0000 UTC m=+0.055771337 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:25:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4208487957' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 05:25:14 compute-0 ceph-mon[75840]: osdmap e3: 0 total, 0 up, 0 in
Nov 22 05:25:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 podman[84716]: 2025-11-22 05:25:14.360007704 +0000 UTC m=+0.179052962 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:25:14 compute-0 sudo[84582]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 461bd35b-0122-40d8-a4b2-3bf20812d1e6 does not exist
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6138f3e6-9db8-482c-b0ad-466b76d84df1 does not exist
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5dffe940-c2f7-4585-9346-d7bce402b49d does not exist
Nov 22 05:25:14 compute-0 sudo[84823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:25:14 compute-0 sudo[84823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:14 compute-0 sudo[84823]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:14 compute-0 sudo[84849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:14 compute-0 sudo[84849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:14 compute-0 sudo[84849]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:14 compute-0 sudo[84851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:25:14 compute-0 sudo[84851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:14 compute-0 sudo[84851]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 05:25:14 compute-0 sudo[84898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:14 compute-0 sudo[84898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 05:25:14 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 05:25:14 compute-0 sudo[84898]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[84924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:15 compute-0 sudo[84924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[84924]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[84928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:15 compute-0 sudo[84928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[84928]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[84973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:15 compute-0 sudo[84973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[84973]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[84979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 05:25:15 compute-0 sudo[84979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[85024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:15 compute-0 sudo[85024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[85024]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[85056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:15 compute-0 sudo[85056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 ceph-mon[75840]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:15 compute-0 sudo[84979]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 objective_brahmagupta[84709]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 05:25:15 compute-0 objective_brahmagupta[84709]: Scheduled mon update...
Nov 22 05:25:15 compute-0 objective_brahmagupta[84709]: Scheduled mgr update...
Nov 22 05:25:15 compute-0 objective_brahmagupta[84709]: Scheduled osd.default_drive_group update...
Nov 22 05:25:15 compute-0 systemd[1]: libpod-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope: Deactivated successfully.
Nov 22 05:25:15 compute-0 podman[84674]: 2025-11-22 05:25:15.458465534 +0000 UTC m=+1.497394447 container died abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-04354b87a3686e51bb534a91e3247f15fb6ee159fc0ca902709303e51e96f10f-merged.mount: Deactivated successfully.
Nov 22 05:25:15 compute-0 podman[84674]: 2025-11-22 05:25:15.517604012 +0000 UTC m=+1.556532935 container remove abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c (image=quay.io/ceph/ceph:v18, name=objective_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:15 compute-0 systemd[1]: libpod-conmon-abb8767ba733e10e7deedf0d1a772ba8ae7420651b070debe1d51650e2ee792c.scope: Deactivated successfully.
Nov 22 05:25:15 compute-0 sudo[84630]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.555775183 +0000 UTC m=+0.072508038 container create f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:15 compute-0 systemd[1]: Started libpod-conmon-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope.
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.525614433 +0000 UTC m=+0.042347288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.640600899 +0000 UTC m=+0.157333814 container init f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.656088145 +0000 UTC m=+0.172820991 container start f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:25:15 compute-0 optimistic_lovelace[85146]: 167 167
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.660707552 +0000 UTC m=+0.177440457 container attach f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:15 compute-0 systemd[1]: libpod-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope: Deactivated successfully.
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.66207771 +0000 UTC m=+0.178810575 container died f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfcdd8fb98dadc96c135f7fc7fbc880eea397af5219521ba7f70783e06176f03-merged.mount: Deactivated successfully.
Nov 22 05:25:15 compute-0 podman[85122]: 2025-11-22 05:25:15.710497704 +0000 UTC m=+0.227230519 container remove f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lovelace, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:15 compute-0 systemd[1]: libpod-conmon-f37943ac63d5ecb5858576bdc5926a6833570c698af1bd8b7aee3af09f931315.scope: Deactivated successfully.
Nov 22 05:25:15 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'crash'
Nov 22 05:25:15 compute-0 sudo[85056]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 05:25:15 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 05:25:15 compute-0 sudo[85192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsofhuiujwuifdgqevvcxcyujinmcvhe ; /usr/bin/python3'
Nov 22 05:25:15 compute-0 sudo[85192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:15 compute-0 sudo[85191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:15 compute-0 sudo[85191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[85191]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:15 compute-0 sudo[85219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:15 compute-0 sudo[85219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:15 compute-0 sudo[85219]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 python3[85215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:16 compute-0 sudo[85244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:16 compute-0 sudo[85244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 sudo[85244]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 ceph-mgr[84421]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:25:16 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'dashboard'
Nov 22 05:25:16 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:16.044+0000 7f6cc6f38140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.070700563 +0000 UTC m=+0.050134962 container create 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:25:16 compute-0 sudo[85277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:16 compute-0 sudo[85277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 systemd[1]: Started libpod-conmon-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope.
Nov 22 05:25:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.05096417 +0000 UTC m=+0.030398609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.160036823 +0000 UTC m=+0.139471222 container init 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.167971551 +0000 UTC m=+0.147405940 container start 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.171693584 +0000 UTC m=+0.151128203 container attach 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:25:16 compute-0 ceph-mon[75840]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 05:25:16 compute-0 ceph-mon[75840]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mscchl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 05:25:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.329036147 +0000 UTC m=+0.057864255 container create 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 05:25:16 compute-0 systemd[1]: Started libpod-conmon-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope.
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.302148486 +0000 UTC m=+0.030976594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.414377087 +0000 UTC m=+0.143205185 container init 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.42102206 +0000 UTC m=+0.149850128 container start 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:16 compute-0 objective_lovelace[85348]: 167 167
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.424233678 +0000 UTC m=+0.153061756 container attach 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:16 compute-0 systemd[1]: libpod-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope: Deactivated successfully.
Nov 22 05:25:16 compute-0 conmon[85348]: conmon 7cd621ba9bed9821618a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope/container/memory.events
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.425555335 +0000 UTC m=+0.154383403 container died 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:16 compute-0 podman[85331]: 2025-11-22 05:25:16.465276759 +0000 UTC m=+0.194104827 container remove 7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 05:25:16 compute-0 systemd[1]: libpod-conmon-7cd621ba9bed9821618a1f1f487ea5eb0e62e9fb3dadc9cef7ae78b398d45030.scope: Deactivated successfully.
Nov 22 05:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-faf45bc56fee1a717a6ebfa86ff8bbb8b2a3246e475600dcf96143f1a5dc69a4-merged.mount: Deactivated successfully.
Nov 22 05:25:16 compute-0 sudo[85277]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:16 compute-0 sudo[85385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:16 compute-0 sudo[85385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 sudo[85385]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 sudo[85410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:16 compute-0 sudo[85410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 sudo[85410]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 sudo[85435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:16 compute-0 sudo[85435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 sudo[85435]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 05:25:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751463261' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:25:16 compute-0 zealous_mcnulty[85310]: 
Nov 22 05:25:16 compute-0 zealous_mcnulty[85310]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-22T05:23:54.066984+0000","services":{}},"progress_events":{}}
Nov 22 05:25:16 compute-0 sudo[85460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:25:16 compute-0 sudo[85460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:16 compute-0 systemd[1]: libpod-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope: Deactivated successfully.
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.820293675 +0000 UTC m=+0.799728094 container died 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:25:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e3a5b3af0fd6d302bbb7f7e1b9703f942fc2002ea6fb9904a3c5d4c8005adc5-merged.mount: Deactivated successfully.
Nov 22 05:25:16 compute-0 podman[85268]: 2025-11-22 05:25:16.866060696 +0000 UTC m=+0.845495085 container remove 3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722 (image=quay.io/ceph/ceph:v18, name=zealous_mcnulty, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:16 compute-0 systemd[1]: libpod-conmon-3577e0f6ca9728abdea4bc659e0c99fd09ca373040f56e24a6b9ffa584484722.scope: Deactivated successfully.
Nov 22 05:25:16 compute-0 sudo[85192]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Added host compute-0
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Saving service mon spec with placement compute-0
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Saving service mgr spec with placement compute-0
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Saving service osd.default_drive_group spec with placement compute-0
Nov 22 05:25:17 compute-0 ceph-mon[75840]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Reconfiguring mgr.compute-0.mscchl (unknown last config time)...
Nov 22 05:25:17 compute-0 ceph-mon[75840]: Reconfiguring daemon mgr.compute-0.mscchl on compute-0
Nov 22 05:25:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/751463261' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:25:17 compute-0 podman[85570]: 2025-11-22 05:25:17.291624295 +0000 UTC m=+0.049683580 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:25:17 compute-0 podman[85570]: 2025-11-22 05:25:17.411064864 +0000 UTC m=+0.169124149 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:17 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'devicehealth'
Nov 22 05:25:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:17 compute-0 sudo[85460]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mgr[84421]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:25:17 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 05:25:17 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:17.775+0000 7f6cc6f38140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ece46ffd-0618-4823-99e7-639fda0645c6 does not exist
Nov 22 05:25:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 05:25:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:17 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1))
Nov 22 05:25:17 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 05:25:17 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 05:25:17 compute-0 sudo[85656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:17 compute-0 sudo[85656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:17 compute-0 sudo[85656]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:17 compute-0 sudo[85681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:17 compute-0 sudo[85681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:17 compute-0 sudo[85681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:18 compute-0 sudo[85706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:18 compute-0 sudo[85706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:18 compute-0 sudo[85706]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:18 compute-0 sudo[85731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --name mgr.compute-0.okewxb --force --tcp-ports 8765
Nov 22 05:25:18 compute-0 sudo[85731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 05:25:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 05:25:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]:   from numpy import show_config as show_numpy_config
Nov 22 05:25:18 compute-0 ceph-mgr[84421]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:25:18 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'influx'
Nov 22 05:25:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:18.310+0000 7f6cc6f38140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 05:25:18 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:18 compute-0 ceph-mgr[84421]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:25:18 compute-0 ceph-mgr[84421]: mgr[py] Loading python module 'insights'
Nov 22 05:25:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb[84417]: 2025-11-22T05:25:18.539+0000 7f6cc6f38140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 05:25:18 compute-0 podman[85821]: 2025-11-22 05:25:18.655182965 +0000 UTC m=+0.143468573 container died 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a7783f027aa044fb3bdfc346408e6093dcf6efbc048a7dafc4c7701422afcf6-merged.mount: Deactivated successfully.
Nov 22 05:25:18 compute-0 podman[85821]: 2025-11-22 05:25:18.730071707 +0000 UTC m=+0.218357305 container remove 2fae96e2a944e95fbec070b602c757a0758c5401c59f731679dd02e98d1c91c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:18 compute-0 bash[85821]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-okewxb
Nov 22 05:25:18 compute-0 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Main process exited, code=exited, status=143/n/a
Nov 22 05:25:18 compute-0 ceph-mon[75840]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:18 compute-0 ceph-mon[75840]: Removing daemon mgr.compute-0.okewxb from compute-0 -- ports [8765]
Nov 22 05:25:18 compute-0 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Failed with result 'exit-code'.
Nov 22 05:25:18 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.okewxb for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:18 compute-0 systemd[1]: ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.okewxb.service: Consumed 6.502s CPU time.
Nov 22 05:25:18 compute-0 systemd[1]: Reloading.
Nov 22 05:25:19 compute-0 systemd-sysv-generator[85912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:19 compute-0 systemd-rc-local-generator[85908]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:19 compute-0 sudo[85731]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.okewxb
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.okewxb
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"} v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]': finished
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1))
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 2231a612-6d16-4523-aa02-dbdc0d490ff6 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ca36ffb-4c3f-4fda-8720-34be37d72804 does not exist
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:19 compute-0 sudo[85918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:19 compute-0 sudo[85918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:19 compute-0 sudo[85918]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:19 compute-0 sudo[85943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:19 compute-0 sudo[85943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:19 compute-0 sudo[85943]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:19 compute-0 sudo[85968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:19 compute-0 sudo[85968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:19 compute-0 sudo[85968]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:19 compute-0 sudo[85993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:25:19 compute-0 sudo[85993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:19 compute-0 ceph-mon[75840]: Removing key for mgr.compute-0.okewxb
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.okewxb"}]': finished
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:25:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:19 compute-0 podman[86058]: 2025-11-22 05:25:19.968073289 +0000 UTC m=+0.060918918 container create a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:20 compute-0 systemd[1]: Started libpod-conmon-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope.
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:19.946046873 +0000 UTC m=+0.038892582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:20.066192052 +0000 UTC m=+0.159037691 container init a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:20.075746104 +0000 UTC m=+0.168591743 container start a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:20.079494007 +0000 UTC m=+0.172339676 container attach a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:20 compute-0 magical_yonath[86074]: 167 167
Nov 22 05:25:20 compute-0 systemd[1]: libpod-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope: Deactivated successfully.
Nov 22 05:25:20 compute-0 conmon[86074]: conmon a47f0a5da247be23b820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope/container/memory.events
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:20.083602241 +0000 UTC m=+0.176447870 container died a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-de41b1af5d169b68dde5312eabbe6b97d46704d984129965c06b9edc6b727d7a-merged.mount: Deactivated successfully.
Nov 22 05:25:20 compute-0 podman[86058]: 2025-11-22 05:25:20.121175945 +0000 UTC m=+0.214021564 container remove a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:20 compute-0 systemd[1]: libpod-conmon-a47f0a5da247be23b82092ae08001125d5629eb5936549ef8746e43edc1eb3b7.scope: Deactivated successfully.
Nov 22 05:25:20 compute-0 podman[86097]: 2025-11-22 05:25:20.296183235 +0000 UTC m=+0.044476746 container create 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:25:20 compute-0 systemd[1]: Started libpod-conmon-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope.
Nov 22 05:25:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:20 compute-0 podman[86097]: 2025-11-22 05:25:20.2774939 +0000 UTC m=+0.025787441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:20 compute-0 podman[86097]: 2025-11-22 05:25:20.387183841 +0000 UTC m=+0.135477352 container init 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:25:20 compute-0 podman[86097]: 2025-11-22 05:25:20.401197707 +0000 UTC m=+0.149491218 container start 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:25:20 compute-0 podman[86097]: 2025-11-22 05:25:20.40530789 +0000 UTC m=+0.153601401 container attach 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:21 compute-0 ceph-mon[75840]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:21 compute-0 ecstatic_saha[86114]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:25:21 compute-0 ecstatic_saha[86114]: --> relative data size: 1.0
Nov 22 05:25:21 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:21 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a5feb48b-30da-4436-abf9-8885d26e1de8
Nov 22 05:25:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"} v 0) v1
Nov 22 05:25:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]: dispatch
Nov 22 05:25:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 22 05:25:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]': finished
Nov 22 05:25:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 22 05:25:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 22 05:25:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:21 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:22 compute-0 lvm[86176]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 05:25:22 compute-0 lvm[86176]: VG ceph_vg0 finished
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 22 05:25:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]: dispatch
Nov 22 05:25:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2205272410' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}]': finished
Nov 22 05:25:22 compute-0 ceph-mon[75840]: osdmap e4: 1 total, 0 up, 1 in
Nov 22 05:25:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 05:25:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843832395' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]:  stderr: got monmap epoch 1
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: --> Creating keyring file for osd.0
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 22 05:25:22 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a5feb48b-30da-4436-abf9-8885d26e1de8 --setuser ceph --setgroup ceph
Nov 22 05:25:23 compute-0 ceph-mon[75840]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3843832395' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:23 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 3 completed events
Nov 22 05:25:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:25:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 05:25:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 05:25:24 compute-0 ceph-mon[75840]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:24 compute-0 ceph-mon[75840]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 05:25:24 compute-0 ceph-mon[75840]: Cluster is now healthy
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:22.621+0000 7fb47d0b6740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1fb2d706-3ef2-43d5-9448-a482f97db695
Nov 22 05:25:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"} v 0) v1
Nov 22 05:25:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]: dispatch
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]': finished
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 22 05:25:25 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:25 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:25 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:25 compute-0 lvm[87117]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:25:25 compute-0 lvm[87117]: VG ceph_vg1 finished
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 22 05:25:25 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 22 05:25:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 05:25:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344868377' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]:  stderr: got monmap epoch 1
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: --> Creating keyring file for osd.1
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 22 05:25:26 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 1fb2d706-3ef2-43d5-9448-a482f97db695 --setuser ceph --setgroup ceph
Nov 22 05:25:26 compute-0 ceph-mon[75840]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]: dispatch
Nov 22 05:25:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1903465149' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}]': finished
Nov 22 05:25:26 compute-0 ceph-mon[75840]: osdmap e5: 2 total, 0 up, 2 in
Nov 22 05:25:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2344868377' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:28 compute-0 ceph-mon[75840]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:26.585+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:26.586+0000 7eff38a46740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 315eef4c-16c8-4117-80ec-ccdc45d85649
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"} v 0) v1
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]': finished
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:29 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:29 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:29 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:29 compute-0 lvm[88060]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:25:29 compute-0 lvm[88060]: VG ceph_vg2 finished
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 05:25:29 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/196516305' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649"}]': finished
Nov 22 05:25:29 compute-0 ceph-mon[75840]: osdmap e6: 3 total, 0 up, 3 in
Nov 22 05:25:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:29 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 22 05:25:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 05:25:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174537781' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:30 compute-0 ecstatic_saha[86114]:  stderr: got monmap epoch 1
Nov 22 05:25:30 compute-0 ecstatic_saha[86114]: --> Creating keyring file for osd.2
Nov 22 05:25:30 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 22 05:25:30 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 22 05:25:30 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 315eef4c-16c8-4117-80ec-ccdc45d85649 --setuser ceph --setgroup ceph
Nov 22 05:25:30 compute-0 ceph-mon[75840]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2174537781' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 05:25:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:30.383+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]:  stderr: 2025-11-22T05:25:30.384+0000 7f674730a740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 22 05:25:32 compute-0 ceph-mon[75840]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 22 05:25:32 compute-0 ecstatic_saha[86114]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 22 05:25:33 compute-0 systemd[1]: libpod-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Deactivated successfully.
Nov 22 05:25:33 compute-0 podman[86097]: 2025-11-22 05:25:33.040977401 +0000 UTC m=+12.789270932 container died 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:33 compute-0 systemd[1]: libpod-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Consumed 6.505s CPU time.
Nov 22 05:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-22dbecf214f8286f8464db9f903701f832a95eee43f18f8e4e8d901f095db474-merged.mount: Deactivated successfully.
Nov 22 05:25:33 compute-0 podman[86097]: 2025-11-22 05:25:33.114102703 +0000 UTC m=+12.862396214 container remove 5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_saha, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:33 compute-0 systemd[1]: libpod-conmon-5c009ccdeaf855d2e2e8867314029c5d9d1db6c503c95d2dd665dc2488e09a67.scope: Deactivated successfully.
Nov 22 05:25:33 compute-0 sudo[85993]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:33 compute-0 sudo[88984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:33 compute-0 sudo[88984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:33 compute-0 sudo[88984]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:33 compute-0 sudo[89009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:33 compute-0 sudo[89009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:33 compute-0 sudo[89009]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:33 compute-0 sudo[89034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:33 compute-0 sudo[89034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:33 compute-0 sudo[89034]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:33 compute-0 sudo[89059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:25:33 compute-0 sudo[89059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:33 compute-0 podman[89123]: 2025-11-22 05:25:33.905397462 +0000 UTC m=+0.061678302 container create c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:25:33 compute-0 systemd[1]: Started libpod-conmon-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope.
Nov 22 05:25:33 compute-0 podman[89123]: 2025-11-22 05:25:33.881768478 +0000 UTC m=+0.038049328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:33 compute-0 podman[89123]: 2025-11-22 05:25:33.999843893 +0000 UTC m=+0.156124793 container init c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 05:25:34 compute-0 podman[89123]: 2025-11-22 05:25:34.012817031 +0000 UTC m=+0.169097841 container start c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:34 compute-0 podman[89123]: 2025-11-22 05:25:34.016536039 +0000 UTC m=+0.172816949 container attach c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:25:34 compute-0 vigilant_hermann[89139]: 167 167
Nov 22 05:25:34 compute-0 systemd[1]: libpod-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope: Deactivated successfully.
Nov 22 05:25:34 compute-0 conmon[89139]: conmon c8c08c02c2b201891c07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope/container/memory.events
Nov 22 05:25:34 compute-0 podman[89123]: 2025-11-22 05:25:34.019656027 +0000 UTC m=+0.175936857 container died c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8897d407c1b10b2922263a45955096034b26325079e37ff8216bd61c0832b2da-merged.mount: Deactivated successfully.
Nov 22 05:25:34 compute-0 podman[89123]: 2025-11-22 05:25:34.061871505 +0000 UTC m=+0.218152325 container remove c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:34 compute-0 systemd[1]: libpod-conmon-c8c08c02c2b201891c078c0707680af0302573aea21d578a0b5888966d3f07a5.scope: Deactivated successfully.
Nov 22 05:25:34 compute-0 podman[89160]: 2025-11-22 05:25:34.274378242 +0000 UTC m=+0.070740197 container create 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:25:34 compute-0 systemd[1]: Started libpod-conmon-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope.
Nov 22 05:25:34 compute-0 podman[89160]: 2025-11-22 05:25:34.245121901 +0000 UTC m=+0.041483886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:34 compute-0 podman[89160]: 2025-11-22 05:25:34.381466142 +0000 UTC m=+0.177828157 container init 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:34 compute-0 podman[89160]: 2025-11-22 05:25:34.396081711 +0000 UTC m=+0.192443666 container start 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:25:34 compute-0 podman[89160]: 2025-11-22 05:25:34.400291714 +0000 UTC m=+0.196653729 container attach 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:25:34 compute-0 ceph-mon[75840]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:35 compute-0 blissful_gauss[89176]: {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     "0": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "devices": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "/dev/loop3"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             ],
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_name": "ceph_lv0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_size": "21470642176",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "name": "ceph_lv0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "tags": {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_name": "ceph",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.crush_device_class": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.encrypted": "0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_id": "0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.vdo": "0"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             },
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "vg_name": "ceph_vg0"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         }
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     ],
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     "1": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "devices": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "/dev/loop4"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             ],
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_name": "ceph_lv1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_size": "21470642176",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "name": "ceph_lv1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "tags": {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_name": "ceph",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.crush_device_class": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.encrypted": "0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_id": "1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.vdo": "0"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             },
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "vg_name": "ceph_vg1"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         }
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     ],
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     "2": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "devices": [
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "/dev/loop5"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             ],
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_name": "ceph_lv2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_size": "21470642176",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "name": "ceph_lv2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "tags": {
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.cluster_name": "ceph",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.crush_device_class": "",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.encrypted": "0",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osd_id": "2",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:                 "ceph.vdo": "0"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             },
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "type": "block",
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:             "vg_name": "ceph_vg2"
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:         }
Nov 22 05:25:35 compute-0 blissful_gauss[89176]:     ]
Nov 22 05:25:35 compute-0 blissful_gauss[89176]: }
Nov 22 05:25:35 compute-0 systemd[1]: libpod-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope: Deactivated successfully.
Nov 22 05:25:35 compute-0 podman[89160]: 2025-11-22 05:25:35.207770552 +0000 UTC m=+1.004132557 container died 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff62b6897b36887fda6d198aecb253d4b2cebd2bf1815c25ef14a07c841d06d-merged.mount: Deactivated successfully.
Nov 22 05:25:35 compute-0 podman[89160]: 2025-11-22 05:25:35.293212271 +0000 UTC m=+1.089574196 container remove 0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:25:35 compute-0 systemd[1]: libpod-conmon-0ff34ee650bd7b01c494ce568f9903def0068696de9146d14301580ce07cba9d.scope: Deactivated successfully.
Nov 22 05:25:35 compute-0 sudo[89059]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 22 05:25:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 05:25:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:35 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:35 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 22 05:25:35 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 22 05:25:35 compute-0 sudo[89196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:35 compute-0 sudo[89196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:35 compute-0 sudo[89196]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:35 compute-0 sudo[89221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:35 compute-0 sudo[89221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:35 compute-0 sudo[89221]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:35 compute-0 sudo[89246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:35 compute-0 sudo[89246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:35 compute-0 sudo[89246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:35 compute-0 sudo[89271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:35 compute-0 sudo[89271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 05:25:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.125657084 +0000 UTC m=+0.061477176 container create a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:36 compute-0 systemd[1]: Started libpod-conmon-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope.
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.094456472 +0000 UTC m=+0.030276624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.215936915 +0000 UTC m=+0.151757057 container init a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.224070921 +0000 UTC m=+0.159890973 container start a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.227764237 +0000 UTC m=+0.163584329 container attach a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:25:36 compute-0 silly_clarke[89353]: 167 167
Nov 22 05:25:36 compute-0 systemd[1]: libpod-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope: Deactivated successfully.
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.230033329 +0000 UTC m=+0.165853371 container died a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f971d4164a4ab3e48fa2402aa8a9ebbd7205abe12c8828339e281f91b55a686-merged.mount: Deactivated successfully.
Nov 22 05:25:36 compute-0 podman[89337]: 2025-11-22 05:25:36.267116915 +0000 UTC m=+0.202936977 container remove a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:25:36 compute-0 systemd[1]: libpod-conmon-a2f93f7e615578d97feaa937a1214b7e27d8f78199426e73d0aed81a7eb87584.scope: Deactivated successfully.
Nov 22 05:25:36 compute-0 podman[89384]: 2025-11-22 05:25:36.610927914 +0000 UTC m=+0.060636569 container create 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:36 compute-0 systemd[1]: Started libpod-conmon-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope.
Nov 22 05:25:36 compute-0 podman[89384]: 2025-11-22 05:25:36.592282677 +0000 UTC m=+0.041991342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:36 compute-0 podman[89384]: 2025-11-22 05:25:36.715602718 +0000 UTC m=+0.165311383 container init 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:25:36 compute-0 podman[89384]: 2025-11-22 05:25:36.733060267 +0000 UTC m=+0.182768922 container start 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:25:36 compute-0 podman[89384]: 2025-11-22 05:25:36.737499736 +0000 UTC m=+0.187208391 container attach 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:36 compute-0 ceph-mon[75840]: Deploying daemon osd.0 on compute-0
Nov 22 05:25:36 compute-0 ceph-mon[75840]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:37 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 05:25:37 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]:                             [--no-systemd] [--no-tmpfs]
Nov 22 05:25:37 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test[89400]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 05:25:37 compute-0 systemd[1]: libpod-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope: Deactivated successfully.
Nov 22 05:25:37 compute-0 podman[89384]: 2025-11-22 05:25:37.370227386 +0000 UTC m=+0.819936031 container died 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6d866b43980e1ef8dbf04a3d8c65dfd83142910d94a74170bf2e083e0deedb8-merged.mount: Deactivated successfully.
Nov 22 05:25:37 compute-0 podman[89384]: 2025-11-22 05:25:37.447833428 +0000 UTC m=+0.897542073 container remove 8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:25:37 compute-0 systemd[1]: libpod-conmon-8fcd061e2d397efbdda3b5fcb9b9c579fc1244f8c65667ed4a2923366f3bfe79.scope: Deactivated successfully.
Nov 22 05:25:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:37 compute-0 systemd[1]: Reloading.
Nov 22 05:25:37 compute-0 systemd-rc-local-generator[89459]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:37 compute-0 systemd-sysv-generator[89464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:38 compute-0 systemd[1]: Reloading.
Nov 22 05:25:38 compute-0 systemd-rc-local-generator[89502]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:38 compute-0 systemd-sysv-generator[89506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:38 compute-0 systemd[1]: Starting Ceph osd.0 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:38 compute-0 podman[89564]: 2025-11-22 05:25:38.64411681 +0000 UTC m=+0.064436859 container create d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:25:38 compute-0 podman[89564]: 2025-11-22 05:25:38.615273862 +0000 UTC m=+0.035593911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:38 compute-0 podman[89564]: 2025-11-22 05:25:38.743790236 +0000 UTC m=+0.164110335 container init d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:38 compute-0 podman[89564]: 2025-11-22 05:25:38.754991088 +0000 UTC m=+0.175311127 container start d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:25:38 compute-0 podman[89564]: 2025-11-22 05:25:38.758953913 +0000 UTC m=+0.179274012 container attach d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:38 compute-0 ceph-mon[75840]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:39 compute-0 bash[89564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 05:25:40 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate[89580]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 05:25:40 compute-0 bash[89564]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 05:25:40 compute-0 systemd[1]: libpod-d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048.scope: Deactivated successfully.
Nov 22 05:25:40 compute-0 systemd[1]: libpod-d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048.scope: Consumed 1.293s CPU time.
Nov 22 05:25:40 compute-0 podman[89564]: 2025-11-22 05:25:40.034090417 +0000 UTC m=+1.454410506 container died d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0bf84429d2904bbe088a15242e7308fb84fdf1125c44b0e20e9096939c97863-merged.mount: Deactivated successfully.
Nov 22 05:25:40 compute-0 podman[89564]: 2025-11-22 05:25:40.098791883 +0000 UTC m=+1.519111912 container remove d16c6186ab0b053db7a802522824383efde15021e977c357e5ca51624dd55048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0-activate, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:25:40 compute-0 podman[89760]: 2025-11-22 05:25:40.400562229 +0000 UTC m=+0.066981429 container create 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:25:40 compute-0 podman[89760]: 2025-11-22 05:25:40.37263248 +0000 UTC m=+0.039051690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efaba00efb5b36b6dd82e59c5d21bec817d6145f9503834429db70c5a3b0e197/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:40 compute-0 podman[89760]: 2025-11-22 05:25:40.484184769 +0000 UTC m=+0.150603999 container init 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:25:40 compute-0 podman[89760]: 2025-11-22 05:25:40.495462594 +0000 UTC m=+0.161881794 container start 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:25:40 compute-0 bash[89760]: 49ecd6cb38e9d2a0db336440a185d6960eb619a50337350cc6a9b22a3d82abe3
Nov 22 05:25:40 compute-0 systemd[1]: Started Ceph osd.0 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:40 compute-0 sudo[89271]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:40 compute-0 ceph-osd[89779]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:25:40 compute-0 ceph-osd[89779]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 05:25:40 compute-0 ceph-osd[89779]: pidfile_write: ignore empty --pid-file
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464d195800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 05:25:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 22 05:25:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 05:25:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:40 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:40 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 22 05:25:40 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 22 05:25:40 compute-0 sudo[89792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:40 compute-0 sudo[89792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:40 compute-0 sudo[89792]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:40 compute-0 sudo[89817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:40 compute-0 sudo[89817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:40 compute-0 sudo[89817]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:40 compute-0 sudo[89842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:40 compute-0 sudo[89842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:40 compute-0 sudo[89842]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:40 compute-0 ceph-osd[89779]: bdev(0x56464c35d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 05:25:40 compute-0 ceph-mon[75840]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 05:25:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:40 compute-0 sudo[89867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:40 compute-0 sudo[89867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 22 05:25:41 compute-0 ceph-osd[89779]: load: jerasure load: lrc 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.32222719 +0000 UTC m=+0.053945859 container create 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:25:41 compute-0 systemd[1]: Started libpod-conmon-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope.
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.294251889 +0000 UTC m=+0.025970558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.437660701 +0000 UTC m=+0.169379440 container init 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.449302448 +0000 UTC m=+0.181021107 container start 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.453057946 +0000 UTC m=+0.184776615 container attach 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:41 compute-0 ecstatic_chatterjee[89959]: 167 167
Nov 22 05:25:41 compute-0 systemd[1]: libpod-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope: Deactivated successfully.
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.459621913 +0000 UTC m=+0.191340582 container died 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a79437b7770937595ea32d63bb7da1238bbc51ceea95732d691e6e52ca0f62-merged.mount: Deactivated successfully.
Nov 22 05:25:41 compute-0 podman[89939]: 2025-11-22 05:25:41.508916634 +0000 UTC m=+0.240635303 container remove 86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:41 compute-0 systemd[1]: libpod-conmon-86cd62821f035c11a9135fab87b19497e89cf223bd517a80d93ddf3aa4ce0173.scope: Deactivated successfully.
Nov 22 05:25:41 compute-0 ceph-osd[89779]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 05:25:41 compute-0 ceph-osd[89779]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d216c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs mount
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs mount shared_bdev_used = 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Git sha 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DB SUMMARY
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DB Session ID:  CK3ECG8VCYUAVQEDRRWE
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                     Options.env: 0x56464d1e7d50
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                Options.info_log: 0x56464c3e47e0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.write_buffer_manager: 0x56464d2f0460
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Compression algorithms supported:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4180)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3f0979e-ad59-462b-9b2c-5d2aa2e48d80
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141678458, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141678716, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: freelist init
Nov 22 05:25:41 compute-0 ceph-osd[89779]: freelist _read_cfg
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs umount
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 05:25:41 compute-0 podman[90186]: 2025-11-22 05:25:41.881641962 +0000 UTC m=+0.066744651 container create 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 22 05:25:41 compute-0 ceph-mon[75840]: Deploying daemon osd.1 on compute-0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bdev(0x56464d217400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs mount
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluefs mount shared_bdev_used = 4718592
Nov 22 05:25:41 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Git sha 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DB SUMMARY
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DB Session ID:  CK3ECG8VCYUAVQEDRRWF
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                     Options.env: 0x56464d3807e0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                Options.info_log: 0x56464c3e4540
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.write_buffer_manager: 0x56464d2f0460
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:41 compute-0 systemd[1]: Started libpod-conmon-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope.
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Compression algorithms supported:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 podman[90186]: 2025-11-22 05:25:41.854452546 +0000 UTC m=+0.039555305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4980)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d11f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56464c3e4300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56464c3d1090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d3f0979e-ad59-462b-9b2c-5d2aa2e48d80
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141967643, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141971810, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141975229, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141978575, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789141, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d3f0979e-ad59-462b-9b2c-5d2aa2e48d80", "db_session_id": "CK3ECG8VCYUAVQEDRRWF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789141980454, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:41 compute-0 ceph-osd[89779]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 05:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:41 compute-0 podman[90186]: 2025-11-22 05:25:41.992640745 +0000 UTC m=+0.177743504 container init 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:25:42 compute-0 podman[90186]: 2025-11-22 05:25:42.005422367 +0000 UTC m=+0.190525026 container start 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:25:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56464c53e000
Nov 22 05:25:42 compute-0 ceph-osd[89779]: rocksdb: DB pointer 0x56464d2d9a00
Nov 22 05:25:42 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:42 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 22 05:25:42 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 22 05:25:42 compute-0 podman[90186]: 2025-11-22 05:25:42.009386082 +0000 UTC m=+0.194488781 container attach 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:25:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:25:42 compute-0 ceph-osd[89779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 05:25:42 compute-0 ceph-osd[89779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 05:25:42 compute-0 ceph-osd[89779]: _get_class not permitted to load lua
Nov 22 05:25:42 compute-0 ceph-osd[89779]: _get_class not permitted to load sdk
Nov 22 05:25:42 compute-0 ceph-osd[89779]: _get_class not permitted to load test_remote_reads
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 load_pgs
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 load_pgs opened 0 pgs
Nov 22 05:25:42 compute-0 ceph-osd[89779]: osd.0 0 log_to_monitors true
Nov 22 05:25:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:42.018+0000 7f35e4ed6740 -1 osd.0 0 log_to_monitors true
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 05:25:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 05:25:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]:                             [--no-systemd] [--no-tmpfs]
Nov 22 05:25:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test[90206]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 05:25:42 compute-0 systemd[1]: libpod-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope: Deactivated successfully.
Nov 22 05:25:42 compute-0 podman[90186]: 2025-11-22 05:25:42.627115619 +0000 UTC m=+0.812218308 container died 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b8c7fe204df949e2e8f8b52ba0a6fcff4d5eb3b544699ee75963ec95b41c530-merged.mount: Deactivated successfully.
Nov 22 05:25:42 compute-0 podman[90186]: 2025-11-22 05:25:42.709588544 +0000 UTC m=+0.894691213 container remove 4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate-test, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:42 compute-0 systemd[1]: libpod-conmon-4aa4fd8ec81e2ea52c2128041bcb5c083691062d52129419aad36755b876a97a.scope: Deactivated successfully.
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:42 compute-0 ceph-mon[75840]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:42 compute-0 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:42 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:42 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:42 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:42 compute-0 systemd[1]: Reloading.
Nov 22 05:25:43 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 05:25:43 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 05:25:43 compute-0 systemd-sysv-generator[90485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:43 compute-0 systemd-rc-local-generator[90481]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:43 compute-0 systemd[1]: Reloading.
Nov 22 05:25:43 compute-0 systemd-rc-local-generator[90517]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:43 compute-0 systemd-sysv-generator[90520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:43 compute-0 systemd[1]: Starting Ceph osd.1 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:25:43
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [balancer INFO root] No pools available
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 done with init, starting boot process
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 start_boot
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 05:25:43 compute-0 ceph-osd[89779]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:43 compute-0 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 05:25:43 compute-0 ceph-mon[75840]: osdmap e7: 3 total, 0 up, 3 in
Nov 22 05:25:43 compute-0 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:43 compute-0 podman[90575]: 2025-11-22 05:25:43.975439676 +0000 UTC m=+0.090119708 container create cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:25:44 compute-0 podman[90575]: 2025-11-22 05:25:43.934831358 +0000 UTC m=+0.049511440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:44 compute-0 podman[90575]: 2025-11-22 05:25:44.079683965 +0000 UTC m=+0.194363987 container init cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:25:44 compute-0 podman[90575]: 2025-11-22 05:25:44.09063938 +0000 UTC m=+0.205319382 container start cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:44 compute-0 podman[90575]: 2025-11-22 05:25:44.11766385 +0000 UTC m=+0.232343872 container attach cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:25:44 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:44 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:44 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:44 compute-0 ceph-mon[75840]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:44 compute-0 ceph-mon[75840]: osdmap e8: 3 total, 0 up, 3 in
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:45 compute-0 bash[90575]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 05:25:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate[90590]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 05:25:45 compute-0 bash[90575]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 05:25:45 compute-0 systemd[1]: libpod-cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11.scope: Deactivated successfully.
Nov 22 05:25:45 compute-0 podman[90575]: 2025-11-22 05:25:45.254147031 +0000 UTC m=+1.368827053 container died cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:45 compute-0 systemd[1]: libpod-cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11.scope: Consumed 1.180s CPU time.
Nov 22 05:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ea24e52ae375f804d17beb922d90f54addef66eb25e0b819d344a2ae3c7bc54-merged.mount: Deactivated successfully.
Nov 22 05:25:45 compute-0 podman[90575]: 2025-11-22 05:25:45.378097702 +0000 UTC m=+1.492777744 container remove cbc1c6065b5fe5632e5ec346abc646ce088fae39bc6a3701de2cbf9f8108bb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1-activate, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:25:45 compute-0 podman[90765]: 2025-11-22 05:25:45.652368431 +0000 UTC m=+0.054118563 container create 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:25:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:45 compute-0 podman[90765]: 2025-11-22 05:25:45.627373796 +0000 UTC m=+0.029123958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b0f7410e882bd2e995f9b46458b07527a8a971592495e909dd30a3be193fd99/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:45 compute-0 podman[90765]: 2025-11-22 05:25:45.766680188 +0000 UTC m=+0.168430351 container init 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:25:45 compute-0 podman[90765]: 2025-11-22 05:25:45.777920192 +0000 UTC m=+0.179670314 container start 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:45 compute-0 ceph-osd[90784]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:25:45 compute-0 ceph-osd[90784]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 05:25:45 compute-0 ceph-osd[90784]: pidfile_write: ignore empty --pid-file
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 05:25:45 compute-0 ceph-osd[90784]: bdev(0x55e99ea8d800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 05:25:45 compute-0 bash[90765]: 4bf032245a1589c409446c225d8eba4901df306285abea444b6567ed4ebf9a01
Nov 22 05:25:45 compute-0 systemd[1]: Started Ceph osd.1 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:45 compute-0 sudo[89867]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:45 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:45 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 22 05:25:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 05:25:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:46 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 22 05:25:46 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 22 05:25:46 compute-0 ceph-mon[75840]: purged_snaps scrub starts
Nov 22 05:25:46 compute-0 ceph-mon[75840]: purged_snaps scrub ok
Nov 22 05:25:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99dc53800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 05:25:46 compute-0 sudo[90797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:46 compute-0 sudo[90797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:46 compute-0 sudo[90797]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:46 compute-0 sudo[90822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:46 compute-0 sudo[90822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:46 compute-0 sudo[90822]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:46 compute-0 sudo[90849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:46 compute-0 sudo[90849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:46 compute-0 sudo[90849]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:46 compute-0 ceph-osd[90784]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 22 05:25:46 compute-0 sudo[90874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: load: jerasure load: lrc 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:46 compute-0 sudo[90874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 05:25:46 compute-0 podman[90947]: 2025-11-22 05:25:46.860694392 +0000 UTC m=+0.098774888 container create cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:46 compute-0 podman[90947]: 2025-11-22 05:25:46.802262045 +0000 UTC m=+0.040342591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 05:25:46 compute-0 ceph-osd[90784]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluefs mount
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluefs mount shared_bdev_used = 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Git sha 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: DB SUMMARY
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: DB Session ID:  0I8ZSKYF4TFY47RR8FK5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                                     Options.env: 0x55e99eadfc70
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                                Options.info_log: 0x55e99dcda8a0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.write_buffer_manager: 0x55e99ebe6460
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Compression algorithms supported:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56f38230-0c37-49fb-a62a-cda82e58aaf5
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789146953221, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789146953620, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 22 05:25:46 compute-0 ceph-osd[90784]: freelist init
Nov 22 05:25:46 compute-0 ceph-osd[90784]: freelist _read_cfg
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 05:25:46 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bluefs umount
Nov 22 05:25:46 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 05:25:46 compute-0 systemd[1]: Started libpod-conmon-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope.
Nov 22 05:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:47 compute-0 sudo[91183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvdweavfkndsjnifswqbrnnrniqqymfg ; /usr/bin/python3'
Nov 22 05:25:47 compute-0 sudo[91183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:25:47 compute-0 podman[90947]: 2025-11-22 05:25:47.120773856 +0000 UTC m=+0.358854342 container init cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:47 compute-0 podman[90947]: 2025-11-22 05:25:47.134722105 +0000 UTC m=+0.372802571 container start cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:25:47 compute-0 trusting_murdock[91158]: 167 167
Nov 22 05:25:47 compute-0 systemd[1]: libpod-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope: Deactivated successfully.
Nov 22 05:25:47 compute-0 conmon[91158]: conmon cee90b474632913a3330 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope/container/memory.events
Nov 22 05:25:47 compute-0 ceph-mon[75840]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 05:25:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:47 compute-0 ceph-mon[75840]: Deploying daemon osd.2 on compute-0
Nov 22 05:25:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bdev(0x55e99eb0f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluefs mount
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluefs mount shared_bdev_used = 4718592
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Git sha 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: DB SUMMARY
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: DB Session ID:  0I8ZSKYF4TFY47RR8FK4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                                     Options.env: 0x55e99ec8e460
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                                Options.info_log: 0x55e99dcda620
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.write_buffer_manager: 0x55e99ebe6460
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Compression algorithms supported:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcdaa20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e99dcda380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e99dcc7090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 56f38230-0c37-49fb-a62a-cda82e58aaf5
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147221890, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:47 compute-0 python3[91185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:25:47 compute-0 podman[90947]: 2025-11-22 05:25:47.261491715 +0000 UTC m=+0.499572201 container attach cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:25:47 compute-0 podman[90947]: 2025-11-22 05:25:47.262041782 +0000 UTC m=+0.500122268 container died cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147276384, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147369074, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147376637, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789147, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "56f38230-0c37-49fb-a62a-cda82e58aaf5", "db_session_id": "0I8ZSKYF4TFY47RR8FK4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789147391282, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 05:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3efe335e144baf4c5adb77d92d25728036f875d1131c7188f58074164177fcb-merged.mount: Deactivated successfully.
Nov 22 05:25:47 compute-0 podman[90947]: 2025-11-22 05:25:47.42819494 +0000 UTC m=+0.666275396 container remove cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_murdock, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:25:47 compute-0 podman[91384]: 2025-11-22 05:25:47.451112261 +0000 UTC m=+0.198023472 container create 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e99de34000
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: DB pointer 0x55e99ebcfa00
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 22 05:25:47 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:25:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:25:47 compute-0 ceph-osd[90784]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 05:25:47 compute-0 ceph-osd[90784]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 05:25:47 compute-0 ceph-osd[90784]: _get_class not permitted to load lua
Nov 22 05:25:47 compute-0 ceph-osd[90784]: _get_class not permitted to load sdk
Nov 22 05:25:47 compute-0 ceph-osd[90784]: _get_class not permitted to load test_remote_reads
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 load_pgs
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 load_pgs opened 0 pgs
Nov 22 05:25:47 compute-0 ceph-osd[90784]: osd.1 0 log_to_monitors true
Nov 22 05:25:47 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:47.462+0000 7fd69825f740 -1 osd.1 0 log_to_monitors true
Nov 22 05:25:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 22 05:25:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 05:25:47 compute-0 systemd[1]: Started libpod-conmon-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope.
Nov 22 05:25:47 compute-0 podman[91384]: 2025-11-22 05:25:47.391131403 +0000 UTC m=+0.138042694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 systemd[1]: libpod-conmon-cee90b474632913a333026072ec3e6121a1acb5bd6c6bfd0483b6829bca6dba7.scope: Deactivated successfully.
Nov 22 05:25:47 compute-0 podman[91384]: 2025-11-22 05:25:47.547197105 +0000 UTC m=+0.294108326 container init 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:47 compute-0 podman[91384]: 2025-11-22 05:25:47.553946097 +0000 UTC m=+0.300857308 container start 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:47 compute-0 podman[91384]: 2025-11-22 05:25:47.567647918 +0000 UTC m=+0.314559129 container attach 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:25:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:47 compute-0 podman[91452]: 2025-11-22 05:25:47.777616254 +0000 UTC m=+0.076901610 container create 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:47 compute-0 podman[91452]: 2025-11-22 05:25:47.736716368 +0000 UTC m=+0.036001784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:47 compute-0 systemd[1]: Started libpod-conmon-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope.
Nov 22 05:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:47 compute-0 podman[91452]: 2025-11-22 05:25:47.882638139 +0000 UTC m=+0.181923485 container init 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:25:47 compute-0 podman[91452]: 2025-11-22 05:25:47.891654323 +0000 UTC m=+0.190939679 container start 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:47 compute-0 podman[91452]: 2025-11-22 05:25:47.898103336 +0000 UTC m=+0.197388682 container attach 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:25:47 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:47 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:48 compute-0 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887174337' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:25:48 compute-0 jovial_goodall[91437]: 
Nov 22 05:25:48 compute-0 jovial_goodall[91437]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":111,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":9,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T05:25:45.657644+0000","services":{}},"progress_events":{}}
Nov 22 05:25:48 compute-0 systemd[1]: libpod-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope: Deactivated successfully.
Nov 22 05:25:48 compute-0 podman[91384]: 2025-11-22 05:25:48.219074726 +0000 UTC m=+0.965985937 container died 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a72c6b55f02fcd43757a8ac081074bc910544195663a1f0e8a86ec97de87c4-merged.mount: Deactivated successfully.
Nov 22 05:25:48 compute-0 podman[91384]: 2025-11-22 05:25:48.268954795 +0000 UTC m=+1.015866006 container remove 2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1 (image=quay.io/ceph/ceph:v18, name=jovial_goodall, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:25:48 compute-0 systemd[1]: libpod-conmon-2aada67d8de48ddd8b17368b148a481b794e93f6b00fbf2859150371f80db1e1.scope: Deactivated successfully.
Nov 22 05:25:48 compute-0 sudo[91183]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.553 iops: 8333.664 elapsed_sec: 0.360
Nov 22 05:25:48 compute-0 ceph-osd[89779]: log_channel(cluster) log [WRN] : OSD bench result of 8333.664434 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 0 waiting for initial osdmap
Nov 22 05:25:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:48.297+0000 7f35e166d640 -1 osd.0 0 waiting for initial osdmap
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 22 05:25:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-0[89775]: 2025-11-22T05:25:48.322+0000 7f35dc47e640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:25:48 compute-0 ceph-osd[89779]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 22 05:25:48 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 05:25:48 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 05:25:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 05:25:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]:                             [--no-systemd] [--no-tmpfs]
Nov 22 05:25:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test[91468]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 05:25:48 compute-0 systemd[1]: libpod-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope: Deactivated successfully.
Nov 22 05:25:48 compute-0 podman[91452]: 2025-11-22 05:25:48.545198388 +0000 UTC m=+0.844483754 container died 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d81078eb1b9996ec0251bc5cd6a753a8c87ea1c0238b4b61d7bf7981efc9916-merged.mount: Deactivated successfully.
Nov 22 05:25:48 compute-0 podman[91452]: 2025-11-22 05:25:48.636081316 +0000 UTC m=+0.935366632 container remove 8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:25:48 compute-0 systemd[1]: libpod-conmon-8931d1fca6606c156397f07ff3ee72aa1e34a99f35796e3e2aad8594f3058585.scope: Deactivated successfully.
Nov 22 05:25:48 compute-0 systemd[1]: Reloading.
Nov 22 05:25:48 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/753438453; not ready for session (expect reconnect)
Nov 22 05:25:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:48 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 05:25:48 compute-0 systemd-sysv-generator[91572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:48 compute-0 systemd-rc-local-generator[91568]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 done with init, starting boot process
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 start_boot
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 05:25:49 compute-0 ceph-osd[90784]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453] boot
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:49 compute-0 ceph-osd[89779]: osd.0 10 state: booting -> active
Nov 22 05:25:49 compute-0 ceph-mon[75840]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 05:25:49 compute-0 ceph-mon[75840]: osdmap e9: 3 total, 0 up, 3 in
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2887174337' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mon[75840]: OSD bench result of 8333.664434 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:25:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:49 compute-0 systemd[1]: Reloading.
Nov 22 05:25:49 compute-0 systemd-rc-local-generator[91606]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:25:49 compute-0 systemd-sysv-generator[91611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:25:49 compute-0 systemd[1]: Starting Ceph osd.2 for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:49 compute-0 ceph-mgr[76134]: [devicehealth INFO root] creating mgr pool
Nov 22 05:25:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 22 05:25:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 05:25:49 compute-0 podman[91665]: 2025-11-22 05:25:49.865188382 +0000 UTC m=+0.097541819 container create b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:25:49 compute-0 podman[91665]: 2025-11-22 05:25:49.809015095 +0000 UTC m=+0.041368572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:49 compute-0 podman[91665]: 2025-11-22 05:25:49.992400675 +0000 UTC m=+0.224754072 container init b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:25:50 compute-0 podman[91665]: 2025-11-22 05:25:50.000238082 +0000 UTC m=+0.232591509 container start b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:50 compute-0 podman[91665]: 2025-11-22 05:25:50.014632585 +0000 UTC m=+0.246986022 container attach b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:50 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:50 compute-0 ceph-mon[75840]: osd.0 [v2:192.168.122.100:6802/753438453,v1:192.168.122.100:6803/753438453] boot
Nov 22 05:25:50 compute-0 ceph-mon[75840]: osdmap e10: 3 total, 1 up, 3 in
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:50 compute-0 ceph-osd[89779]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 05:25:50 compute-0 ceph-osd[89779]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 22 05:25:50 compute-0 ceph-osd[89779]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 22 05:25:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 05:25:50 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:51 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 22 05:25:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:51 compute-0 ceph-mon[75840]: purged_snaps scrub starts
Nov 22 05:25:51 compute-0 ceph-mon[75840]: purged_snaps scrub ok
Nov 22 05:25:51 compute-0 ceph-mon[75840]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 05:25:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 05:25:51 compute-0 ceph-mon[75840]: osdmap e11: 3 total, 1 up, 3 in
Nov 22 05:25:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 05:25:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 22 05:25:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 22 05:25:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:51 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:51 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:51 compute-0 bash[91665]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 05:25:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate[91682]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 05:25:51 compute-0 bash[91665]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 05:25:51 compute-0 systemd[1]: libpod-b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e.scope: Deactivated successfully.
Nov 22 05:25:51 compute-0 podman[91665]: 2025-11-22 05:25:51.524933618 +0000 UTC m=+1.757287035 container died b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:25:51 compute-0 systemd[1]: libpod-b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e.scope: Consumed 1.532s CPU time.
Nov 22 05:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c1ef64cc6053cde146e436e7256b09454eaf0f2e6c28ba9f283ab70e4c3b5c-merged.mount: Deactivated successfully.
Nov 22 05:25:51 compute-0 podman[91665]: 2025-11-22 05:25:51.63909794 +0000 UTC m=+1.871451347 container remove b4f48068859f29688585bb58f48dd1ace8620015e205e97cd8fe8b01bcbb3b3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:25:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 05:25:51 compute-0 podman[91862]: 2025-11-22 05:25:51.936018642 +0000 UTC m=+0.079792411 container create 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:51 compute-0 podman[91862]: 2025-11-22 05:25:51.89493132 +0000 UTC m=+0.038705139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6570fda2b850618740589607852b0cfe51c424d6734fab4d893875f191a99f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:52 compute-0 podman[91862]: 2025-11-22 05:25:52.030157836 +0000 UTC m=+0.173931605 container init 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:52 compute-0 podman[91862]: 2025-11-22 05:25:52.04779097 +0000 UTC m=+0.191564729 container start 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:25:52 compute-0 bash[91862]: 320c74d221262d786394cade06cc61f708c64820d5b4699634e55389e05c94eb
Nov 22 05:25:52 compute-0 systemd[1]: Started Ceph osd.2 for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:25:52 compute-0 ceph-osd[91881]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:25:52 compute-0 ceph-osd[91881]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 05:25:52 compute-0 ceph-osd[91881]: pidfile_write: ignore empty --pid-file
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27adcb800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 05:25:52 compute-0 sudo[90874]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:52 compute-0 sudo[91894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:52 compute-0 sudo[91894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:52 compute-0 sudo[91894]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:52 compute-0 sudo[91919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:52 compute-0 sudo[91919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:52 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:52 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:52 compute-0 sudo[91919]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c279f93800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 05:25:52 compute-0 ceph-mon[75840]: osdmap e12: 3 total, 1 up, 3 in
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:52 compute-0 ceph-mon[75840]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:52 compute-0 sudo[91944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:52 compute-0 sudo[91944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:52 compute-0 sudo[91944]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:52 compute-0 sudo[91971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:25:52 compute-0 sudo[91971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:52 compute-0 ceph-osd[91881]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 22 05:25:52 compute-0 ceph-osd[91881]: load: jerasure load: lrc 
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:52 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 05:25:52 compute-0 podman[92040]: 2025-11-22 05:25:52.996688328 +0000 UTC m=+0.095436414 container create 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:52.940669776 +0000 UTC m=+0.039417912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:53 compute-0 systemd[1]: Started libpod-conmon-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope.
Nov 22 05:25:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:53.143062233 +0000 UTC m=+0.241810309 container init 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:53.152328265 +0000 UTC m=+0.251076351 container start 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:53 compute-0 eloquent_moore[92060]: 167 167
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:53.159570153 +0000 UTC m=+0.258318229 container attach 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:25:53 compute-0 systemd[1]: libpod-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope: Deactivated successfully.
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:53.162972911 +0000 UTC m=+0.261720967 container died 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5ec00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-05264c2ba5da3af8d578e4da997979f4f65ef7c4379dadd836dcf985522454c4-merged.mount: Deactivated successfully.
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs mount
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs mount shared_bdev_used = 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Git sha 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DB SUMMARY
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DB Session ID:  2LZ21EPTPTNW4W1U2F4C
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                     Options.env: 0x55c27ae1dd50
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                Options.info_log: 0x55c27a01ea40
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.write_buffer_manager: 0x55c27af2e460
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Compression algorithms supported:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f0e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 podman[92040]: 2025-11-22 05:25:53.234274524 +0000 UTC m=+0.333022560 container remove 71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01f080)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 00d3a8ab-719a-4a16-94c2-99fe9381ec3c
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153240754, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153240995, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: freelist init
Nov 22 05:25:53 compute-0 ceph-osd[91881]: freelist _read_cfg
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs umount
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 05:25:53 compute-0 systemd[1]: libpod-conmon-71c5a4d0a20e0d812135a77bd9f4953c84ff71e2c7f613cf1c7beb2128be719d.scope: Deactivated successfully.
Nov 22 05:25:53 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:53 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:53 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:53 compute-0 podman[92276]: 2025-11-22 05:25:53.39051812 +0000 UTC m=+0.053863115 container create 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:25:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.910 iops: 8424.898 elapsed_sec: 0.356
Nov 22 05:25:53 compute-0 ceph-osd[90784]: log_channel(cluster) log [WRN] : OSD bench result of 8424.897601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:25:53 compute-0 systemd[1]: Started libpod-conmon-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope.
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 0 waiting for initial osdmap
Nov 22 05:25:53 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:53.441+0000 7fd6949f6640 -1 osd.1 0 waiting for initial osdmap
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 22 05:25:53 compute-0 podman[92276]: 2025-11-22 05:25:53.363379867 +0000 UTC m=+0.026724892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 22 05:25:53 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-1[90780]: 2025-11-22T05:25:53.466+0000 7fd68f807640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:25:53 compute-0 ceph-osd[90784]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 22 05:25:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bdev(0x55c27ae5f400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs mount
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluefs mount shared_bdev_used = 4718592
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 05:25:53 compute-0 podman[92276]: 2025-11-22 05:25:53.498444536 +0000 UTC m=+0.161789531 container init 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: RocksDB version: 7.9.2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Git sha 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DB SUMMARY
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DB Session ID:  2LZ21EPTPTNW4W1U2F4D
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: CURRENT file:  CURRENT
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.error_if_exists: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.create_if_missing: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                     Options.env: 0x55c27afde3f0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                Options.info_log: 0x55c27a01e800
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.statistics: (nil)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.use_fsync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.db_log_dir: 
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.write_buffer_manager: 0x55c27af2e460
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.unordered_write: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.row_cache: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                              Options.wal_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.two_write_queues: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.wal_compression: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.atomic_flush: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_background_jobs: 4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_background_compactions: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_subcompactions: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.max_open_files: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Compression algorithms supported:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZSTD supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kXpressCompression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kBZip2Compression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kLZ4Compression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kZlibCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         kSnappyCompression supported: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 podman[92276]: 2025-11-22 05:25:53.508163712 +0000 UTC m=+0.171508737 container start 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 podman[92276]: 2025-11-22 05:25:53.512717645 +0000 UTC m=+0.176062650 container attach 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01bfa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:           Options.merge_operator: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c27a01e5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c27a006430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.compression: LZ4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.num_levels: 7
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.bloom_locality: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                               Options.ttl: 2592000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                       Options.enable_blob_files: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                           Options.min_blob_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 00d3a8ab-719a-4a16-94c2-99fe9381ec3c
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153520801, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153525144, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153528038, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153531051, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789153, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "00d3a8ab-719a-4a16-94c2-99fe9381ec3c", "db_session_id": "2LZ21EPTPTNW4W1U2F4D", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789153534093, "job": 1, "event": "recovery_finished"}
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c27b00e000
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: DB pointer 0x55c27a041a00
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 22 05:25:53 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:25:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:25:53 compute-0 ceph-osd[91881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 05:25:53 compute-0 ceph-osd[91881]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 05:25:53 compute-0 ceph-osd[91881]: _get_class not permitted to load lua
Nov 22 05:25:53 compute-0 ceph-osd[91881]: _get_class not permitted to load sdk
Nov 22 05:25:53 compute-0 ceph-osd[91881]: _get_class not permitted to load test_remote_reads
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 load_pgs
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 load_pgs opened 0 pgs
Nov 22 05:25:53 compute-0 ceph-osd[91881]: osd.2 0 log_to_monitors true
Nov 22 05:25:53 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:25:53.562+0000 7f68b4f02740 -1 osd.2 0 log_to_monitors true
Nov 22 05:25:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 22 05:25:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 05:25:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 05:25:54 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/167946803; not ready for session (expect reconnect)
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 22 05:25:54 compute-0 ceph-mon[75840]: OSD bench result of 8424.897601 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:25:54 compute-0 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mon[75840]: pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 05:25:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803] boot
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:54 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:54 compute-0 ceph-osd[90784]: osd.1 13 state: booting -> active
Nov 22 05:25:54 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]: {
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_id": 1,
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "type": "bluestore"
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     },
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_id": 2,
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "type": "bluestore"
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     },
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_id": 0,
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:         "type": "bluestore"
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]:     }
Nov 22 05:25:54 compute-0 lucid_bhabha[92293]: }
Nov 22 05:25:54 compute-0 systemd[1]: libpod-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Deactivated successfully.
Nov 22 05:25:54 compute-0 podman[92276]: 2025-11-22 05:25:54.509877182 +0000 UTC m=+1.173222177 container died 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:25:54 compute-0 systemd[1]: libpod-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Consumed 1.010s CPU time.
Nov 22 05:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db82911a04e6d6e0bd58202753d753823601166fbc74da845791ee14386257f-merged.mount: Deactivated successfully.
Nov 22 05:25:54 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 05:25:54 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 05:25:54 compute-0 podman[92276]: 2025-11-22 05:25:54.57177403 +0000 UTC m=+1.235119055 container remove 9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:54 compute-0 systemd[1]: libpod-conmon-9d7f2d0dff34ae8f47b6b8a8b33f28bcb34b430378596a37624a7cf93f6142fa.scope: Deactivated successfully.
Nov 22 05:25:54 compute-0 sudo[91971]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:54 compute-0 sudo[92557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:54 compute-0 sudo[92557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:54 compute-0 sudo[92557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:54 compute-0 sudo[92582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:25:54 compute-0 sudo[92582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:54 compute-0 sudo[92582]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:54 compute-0 sudo[92607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:54 compute-0 sudo[92607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:54 compute-0 sudo[92607]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:55 compute-0 sudo[92632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:55 compute-0 sudo[92632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:55 compute-0 sudo[92632]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:55 compute-0 sudo[92657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:55 compute-0 sudo[92657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:55 compute-0 sudo[92657]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:55 compute-0 sudo[92682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:25:55 compute-0 sudo[92682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 22 05:25:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 done with init, starting boot process
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 start_boot
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 05:25:55 compute-0 ceph-osd[91881]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 22 05:25:55 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 22 05:25:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:25:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 05:25:55 compute-0 ceph-mon[75840]: osd.1 [v2:192.168.122.100:6806/167946803,v1:192.168.122.100:6807/167946803] boot
Nov 22 05:25:55 compute-0 ceph-mon[75840]: osdmap e13: 3 total, 2 up, 3 in
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: [devicehealth INFO root] creating main.db for devicehealth
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:25:55 compute-0 podman[92779]: 2025-11-22 05:25:55.814570845 +0000 UTC m=+0.109261069 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:25:55 compute-0 podman[92779]: 2025-11-22 05:25:55.927726896 +0000 UTC m=+0.222417030 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 05:25:55 compute-0 ceph-mgr[76134]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 22 05:25:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 05:25:55 compute-0 sudo[92821]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 22 05:25:55 compute-0 sudo[92821]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 05:25:55 compute-0 sudo[92821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 22 05:25:56 compute-0 sudo[92821]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 22 05:25:56 compute-0 sudo[92682]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 05:25:56 compute-0 ceph-mon[75840]: osdmap e14: 3 total, 2 up, 3 in
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mon[75840]: pgmap v41: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:56 compute-0 sudo[92910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:56 compute-0 sudo[92910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:56 compute-0 sudo[92910]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:25:57 compute-0 sudo[92935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:25:57 compute-0 sudo[92935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:57 compute-0 sudo[92935]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:57 compute-0 sudo[92960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:57 compute-0 sudo[92960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:57 compute-0 sudo[92960]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:57 compute-0 sudo[92985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- inventory --format=json-pretty --filter-for-batch
Nov 22 05:25:57 compute-0 sudo[92985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:57 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:25:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:57 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.546725569 +0000 UTC m=+0.047064411 container create be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:25:57 compute-0 systemd[1]: Started libpod-conmon-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope.
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.518599444 +0000 UTC m=+0.018938276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.646127167 +0000 UTC m=+0.146466019 container init be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.653419927 +0000 UTC m=+0.153758749 container start be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:25:57 compute-0 vigorous_moore[93067]: 167 167
Nov 22 05:25:57 compute-0 systemd[1]: libpod-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope: Deactivated successfully.
Nov 22 05:25:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.667641654 +0000 UTC m=+0.167980686 container attach be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.668521791 +0000 UTC m=+0.168860603 container died be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-06cba4066fbea54b3f9ad622c89ef008469f75a166d525da5745a9acd730c620-merged.mount: Deactivated successfully.
Nov 22 05:25:57 compute-0 podman[93051]: 2025-11-22 05:25:57.78256162 +0000 UTC m=+0.282900432 container remove be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:25:57 compute-0 systemd[1]: libpod-conmon-be06e1f20f388d2b9053b5be2718fa43baafa038e9ec7c67de337a0329495a9e.scope: Deactivated successfully.
Nov 22 05:25:57 compute-0 ceph-mon[75840]: purged_snaps scrub starts
Nov 22 05:25:57 compute-0 ceph-mon[75840]: purged_snaps scrub ok
Nov 22 05:25:57 compute-0 ceph-mon[75840]: osdmap e15: 3 total, 2 up, 3 in
Nov 22 05:25:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mscchl(active, since 74s)
Nov 22 05:25:57 compute-0 podman[93095]: 2025-11-22 05:25:57.993798777 +0000 UTC m=+0.055833848 container create a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 22 05:25:58 compute-0 systemd[1]: Started libpod-conmon-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope.
Nov 22 05:25:58 compute-0 podman[93095]: 2025-11-22 05:25:57.969979538 +0000 UTC m=+0.032014629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:25:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:25:58 compute-0 podman[93095]: 2025-11-22 05:25:58.142198117 +0000 UTC m=+0.204233228 container init a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:25:58 compute-0 podman[93095]: 2025-11-22 05:25:58.153131881 +0000 UTC m=+0.215166982 container start a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:25:58 compute-0 podman[93095]: 2025-11-22 05:25:58.178419657 +0000 UTC m=+0.240454758 container attach a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:25:58 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:25:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:58 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:58 compute-0 ceph-mon[75840]: pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:25:58 compute-0 ceph-mon[75840]: mgrmap e9: compute-0.mscchl(active, since 74s)
Nov 22 05:25:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]: [
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:     {
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "available": false,
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "ceph_device": false,
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "lsm_data": {},
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "lvs": [],
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "path": "/dev/sr0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "rejected_reasons": [
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "Insufficient space (<5GB)",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "Has a FileSystem"
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         ],
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         "sys_api": {
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "actuators": null,
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "device_nodes": "sr0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "devname": "sr0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "human_readable_size": "482.00 KB",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "id_bus": "ata",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "model": "QEMU DVD-ROM",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "nr_requests": "2",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "parent": "/dev/sr0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "partitions": {},
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "path": "/dev/sr0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "removable": "1",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "rev": "2.5+",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "ro": "0",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "rotational": "1",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "sas_address": "",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "sas_device_handle": "",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "scheduler_mode": "mq-deadline",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "sectors": 0,
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "sectorsize": "2048",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "size": 493568.0,
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "support_discard": "2048",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "type": "disk",
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:             "vendor": "QEMU"
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:         }
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]:     }
Nov 22 05:25:59 compute-0 unruffled_bhaskara[93113]: ]
Nov 22 05:25:59 compute-0 systemd[1]: libpod-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Deactivated successfully.
Nov 22 05:25:59 compute-0 podman[93095]: 2025-11-22 05:25:59.603372743 +0000 UTC m=+1.665407864 container died a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:25:59 compute-0 systemd[1]: libpod-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Consumed 1.483s CPU time.
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4075c014663bdd6a8e227682e04da8ebd83a4861501fded8969d8d9d49fe151c-merged.mount: Deactivated successfully.
Nov 22 05:25:59 compute-0 podman[93095]: 2025-11-22 05:25:59.713144068 +0000 UTC m=+1.775179169 container remove a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bhaskara, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:25:59 compute-0 systemd[1]: libpod-conmon-a608676e1d90584aed38ee5bdcdd212908f2286b5db37bd8f98e1a48e864d4f6.scope: Deactivated successfully.
Nov 22 05:25:59 compute-0 sudo[92985]: pam_unix(sudo:session): session closed for user root
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 978a8c5b-031a-422a-9d3e-ea4133a49b4b does not exist
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 65848e5d-ba47-4ec3-af77-b3fe8501ef1a does not exist
Nov 22 05:25:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ace50b9c-d33f-430f-9735-6f9756f407f4 does not exist
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:25:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:25:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:25:59 compute-0 sudo[94753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:25:59 compute-0 sudo[94753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:25:59 compute-0 sudo[94753]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:00 compute-0 sudo[94778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:00 compute-0 sudo[94778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:00 compute-0 sudo[94778]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:00 compute-0 sudo[94803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:00 compute-0 sudo[94803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:00 compute-0 sudo[94803]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:00 compute-0 sudo[94828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:26:00 compute-0 sudo[94828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 20.090 iops: 5143.141 elapsed_sec: 0.583
Nov 22 05:26:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [WRN] : OSD bench result of 5143.140774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 0 waiting for initial osdmap
Nov 22 05:26:00 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:26:00.253+0000 7f68b1699640 -1 osd.2 0 waiting for initial osdmap
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 check_osdmap_features require_osd_release unknown -> reef
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:26:00 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-osd-2[91877]: 2025-11-22T05:26:00.293+0000 7f68ac4aa640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 set_numa_affinity not setting numa affinity
Nov 22 05:26:00 compute-0 ceph-osd[91881]: osd.2 15 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 22 05:26:00 compute-0 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/237290084; not ready for session (expect reconnect)
Nov 22 05:26:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:26:00 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:26:00 compute-0 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.570189396 +0000 UTC m=+0.068995632 container create ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:00 compute-0 systemd[1]: Started libpod-conmon-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope.
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.538275111 +0000 UTC m=+0.037081377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.666978921 +0000 UTC m=+0.165785197 container init ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.678732121 +0000 UTC m=+0.177538357 container start ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:26:00 compute-0 pensive_solomon[94911]: 167 167
Nov 22 05:26:00 compute-0 systemd[1]: libpod-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope: Deactivated successfully.
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.686098053 +0000 UTC m=+0.184904289 container attach ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.687143715 +0000 UTC m=+0.185949951 container died ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-19e4efbbd7b6ec0e085cd99d0309bba92ff4727a1271084bda39fa2c0dd535f5-merged.mount: Deactivated successfully.
Nov 22 05:26:00 compute-0 podman[94894]: 2025-11-22 05:26:00.757135758 +0000 UTC m=+0.255941994 container remove ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_solomon, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:00 compute-0 systemd[1]: libpod-conmon-ab16b73e16a433599cbcf7689ed0110ddf62f0a89a4d0a4731328f6a68bd9e56.scope: Deactivated successfully.
Nov 22 05:26:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 22 05:26:00 compute-0 ceph-mon[75840]: pgmap v44: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 22 05:26:00 compute-0 ceph-mon[75840]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 22 05:26:00 compute-0 ceph-mon[75840]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 22 05:26:00 compute-0 ceph-mon[75840]: OSD bench result of 5143.140774 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 05:26:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:26:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Nov 22 05:26:01 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084] boot
Nov 22 05:26:01 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Nov 22 05:26:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 05:26:01 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:26:01 compute-0 ceph-osd[91881]: osd.2 16 state: booting -> active
Nov 22 05:26:01 compute-0 podman[94935]: 2025-11-22 05:26:01.018045278 +0000 UTC m=+0.085721358 container create ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:26:01 compute-0 podman[94935]: 2025-11-22 05:26:00.97839014 +0000 UTC m=+0.046066310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:01 compute-0 systemd[1]: Started libpod-conmon-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope.
Nov 22 05:26:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:01 compute-0 podman[94935]: 2025-11-22 05:26:01.12968336 +0000 UTC m=+0.197359450 container init ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:01 compute-0 podman[94935]: 2025-11-22 05:26:01.140982367 +0000 UTC m=+0.208658477 container start ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:26:01 compute-0 podman[94935]: 2025-11-22 05:26:01.150714612 +0000 UTC m=+0.218390732 container attach ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 05:26:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 22 05:26:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Nov 22 05:26:02 compute-0 ceph-mon[75840]: osd.2 [v2:192.168.122.100:6810/237290084,v1:192.168.122.100:6811/237290084] boot
Nov 22 05:26:02 compute-0 ceph-mon[75840]: osdmap e16: 3 total, 3 up, 3 in
Nov 22 05:26:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 05:26:02 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Nov 22 05:26:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:02 compute-0 hungry_torvalds[94951]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:26:02 compute-0 hungry_torvalds[94951]: --> relative data size: 1.0
Nov 22 05:26:02 compute-0 hungry_torvalds[94951]: --> All data devices are unavailable
Nov 22 05:26:02 compute-0 systemd[1]: libpod-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope: Deactivated successfully.
Nov 22 05:26:02 compute-0 podman[94935]: 2025-11-22 05:26:02.164298146 +0000 UTC m=+1.231974216 container died ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a7ff0baeef1f706b88e49000c42c2cd45f08971f43f16c0d8d28477be4073d-merged.mount: Deactivated successfully.
Nov 22 05:26:02 compute-0 podman[94935]: 2025-11-22 05:26:02.223629693 +0000 UTC m=+1.291305763 container remove ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:26:02 compute-0 systemd[1]: libpod-conmon-ef50f7c191f9f14b9ce08393d57aa34f952333bca1f7baea974574c59a72644d.scope: Deactivated successfully.
Nov 22 05:26:02 compute-0 sudo[94828]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:02 compute-0 sudo[94992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:02 compute-0 sudo[94992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:02 compute-0 sudo[94992]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:02 compute-0 sudo[95017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:02 compute-0 sudo[95017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:02 compute-0 sudo[95017]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:02 compute-0 sudo[95042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:02 compute-0 sudo[95042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:02 compute-0 sudo[95042]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:02 compute-0 sudo[95067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:26:02 compute-0 sudo[95067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:03 compute-0 ceph-mon[75840]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 05:26:03 compute-0 ceph-mon[75840]: osdmap e17: 3 total, 3 up, 3 in
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.058035599 +0000 UTC m=+0.066011068 container create c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:26:03 compute-0 systemd[1]: Started libpod-conmon-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope.
Nov 22 05:26:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.031794943 +0000 UTC m=+0.039770492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.143951502 +0000 UTC m=+0.151927061 container init c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.156980191 +0000 UTC m=+0.164955690 container start c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.162116723 +0000 UTC m=+0.170092292 container attach c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:26:03 compute-0 angry_booth[95147]: 167 167
Nov 22 05:26:03 compute-0 systemd[1]: libpod-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope: Deactivated successfully.
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.16645128 +0000 UTC m=+0.174426829 container died c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-facabab25a0a5e3bb88efd09e95d1f72d9d46c08207272643c664574aad0b4d3-merged.mount: Deactivated successfully.
Nov 22 05:26:03 compute-0 podman[95131]: 2025-11-22 05:26:03.218861739 +0000 UTC m=+0.226837278 container remove c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:26:03 compute-0 systemd[1]: libpod-conmon-c20a0afd36c7957ceab7b69f8f77fb166d3a9d3c482ed1f81863972427e3a704.scope: Deactivated successfully.
Nov 22 05:26:03 compute-0 podman[95171]: 2025-11-22 05:26:03.431985795 +0000 UTC m=+0.058273125 container create 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:26:03 compute-0 systemd[1]: Started libpod-conmon-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope.
Nov 22 05:26:03 compute-0 podman[95171]: 2025-11-22 05:26:03.410686685 +0000 UTC m=+0.036974045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:03 compute-0 podman[95171]: 2025-11-22 05:26:03.531761465 +0000 UTC m=+0.158048785 container init 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:26:03 compute-0 podman[95171]: 2025-11-22 05:26:03.547317614 +0000 UTC m=+0.173604934 container start 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:26:03 compute-0 podman[95171]: 2025-11-22 05:26:03.551627539 +0000 UTC m=+0.177914889 container attach 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:26:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 05:26:04 compute-0 goofy_pike[95187]: {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     "0": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "devices": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "/dev/loop3"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             ],
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_name": "ceph_lv0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_size": "21470642176",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "name": "ceph_lv0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "tags": {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.crush_device_class": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.encrypted": "0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_id": "0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.vdo": "0"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             },
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "vg_name": "ceph_vg0"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         }
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     ],
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     "1": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "devices": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "/dev/loop4"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             ],
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_name": "ceph_lv1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_size": "21470642176",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "name": "ceph_lv1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "tags": {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.crush_device_class": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.encrypted": "0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_id": "1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.vdo": "0"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             },
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "vg_name": "ceph_vg1"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         }
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     ],
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     "2": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "devices": [
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "/dev/loop5"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             ],
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_name": "ceph_lv2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_size": "21470642176",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "name": "ceph_lv2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "tags": {
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.crush_device_class": "",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.encrypted": "0",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osd_id": "2",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:                 "ceph.vdo": "0"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             },
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "type": "block",
Nov 22 05:26:04 compute-0 goofy_pike[95187]:             "vg_name": "ceph_vg2"
Nov 22 05:26:04 compute-0 goofy_pike[95187]:         }
Nov 22 05:26:04 compute-0 goofy_pike[95187]:     ]
Nov 22 05:26:04 compute-0 goofy_pike[95187]: }
Nov 22 05:26:04 compute-0 systemd[1]: libpod-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope: Deactivated successfully.
Nov 22 05:26:04 compute-0 podman[95196]: 2025-11-22 05:26:04.379432247 +0000 UTC m=+0.023605973 container died 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-25f90a4ae5e496c6cc65fb3a1a0a9522804ece397e3e7a0fb621c5e31bf7843f-merged.mount: Deactivated successfully.
Nov 22 05:26:04 compute-0 podman[95196]: 2025-11-22 05:26:04.433849419 +0000 UTC m=+0.078023115 container remove 06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_pike, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:04 compute-0 systemd[1]: libpod-conmon-06a8f45ce958e146dc4902ec69d4d9ba64d84a2bd4951cbf93179403e69b0b0d.scope: Deactivated successfully.
Nov 22 05:26:04 compute-0 sudo[95067]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:04 compute-0 sudo[95211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:04 compute-0 sudo[95211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:04 compute-0 sudo[95211]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:04 compute-0 sudo[95236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:04 compute-0 sudo[95236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:04 compute-0 sudo[95236]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:04 compute-0 sudo[95261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:04 compute-0 sudo[95261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:04 compute-0 sudo[95261]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:04 compute-0 sudo[95286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:26:04 compute-0 sudo[95286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:05 compute-0 ceph-mon[75840]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 879 MiB used, 59 GiB / 60 GiB avail
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.14160556 +0000 UTC m=+0.049775768 container create 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:26:05 compute-0 systemd[1]: Started libpod-conmon-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope.
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.113805175 +0000 UTC m=+0.021975443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.23345921 +0000 UTC m=+0.141629408 container init 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.245071066 +0000 UTC m=+0.153241284 container start 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:05 compute-0 angry_goldstine[95368]: 167 167
Nov 22 05:26:05 compute-0 systemd[1]: libpod-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope: Deactivated successfully.
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.251037193 +0000 UTC m=+0.159207471 container attach 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.25186165 +0000 UTC m=+0.160031858 container died 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a93b69e67ed9b1d214c0d2bb6b24f7d5ce479f3bbe7ac96d479521fdb636acb-merged.mount: Deactivated successfully.
Nov 22 05:26:05 compute-0 podman[95351]: 2025-11-22 05:26:05.293056865 +0000 UTC m=+0.201227063 container remove 6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:26:05 compute-0 systemd[1]: libpod-conmon-6090a36dcd522acd60a9a6164418fb18f25a6bfaa188846715526063b8c0fc64.scope: Deactivated successfully.
Nov 22 05:26:05 compute-0 podman[95391]: 2025-11-22 05:26:05.527595916 +0000 UTC m=+0.048153997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:05 compute-0 podman[95391]: 2025-11-22 05:26:05.779103919 +0000 UTC m=+0.299661960 container create c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:26:05 compute-0 systemd[1]: Started libpod-conmon-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope.
Nov 22 05:26:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:05 compute-0 podman[95391]: 2025-11-22 05:26:05.897176744 +0000 UTC m=+0.417734755 container init c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:26:05 compute-0 podman[95391]: 2025-11-22 05:26:05.904441204 +0000 UTC m=+0.424999215 container start c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:26:05 compute-0 podman[95391]: 2025-11-22 05:26:05.910896396 +0000 UTC m=+0.431454387 container attach c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:26:06 compute-0 hopeful_cori[95407]: {
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_id": 1,
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "type": "bluestore"
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     },
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_id": 2,
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "type": "bluestore"
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     },
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_id": 0,
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:         "type": "bluestore"
Nov 22 05:26:06 compute-0 hopeful_cori[95407]:     }
Nov 22 05:26:06 compute-0 hopeful_cori[95407]: }
Nov 22 05:26:06 compute-0 systemd[1]: libpod-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Deactivated successfully.
Nov 22 05:26:06 compute-0 podman[95391]: 2025-11-22 05:26:06.968943479 +0000 UTC m=+1.489501480 container died c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:06 compute-0 systemd[1]: libpod-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Consumed 1.069s CPU time.
Nov 22 05:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-14004ac149d08afa1632df8ac696e921aad9bcdd938d542f175475e47cd56c78-merged.mount: Deactivated successfully.
Nov 22 05:26:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:07 compute-0 ceph-mon[75840]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:07 compute-0 podman[95391]: 2025-11-22 05:26:07.037015211 +0000 UTC m=+1.557573232 container remove c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:07 compute-0 systemd[1]: libpod-conmon-c827fee825c386c9daddf7d5eafadfdde3b82de64896e80255f12e0d9fd5c42c.scope: Deactivated successfully.
Nov 22 05:26:07 compute-0 sudo[95286]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:07 compute-0 sudo[95453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:07 compute-0 sudo[95453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 sudo[95453]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 sudo[95478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:26:07 compute-0 sudo[95478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 sudo[95478]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 sudo[95503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:07 compute-0 sudo[95503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 sudo[95503]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 sudo[95528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:07 compute-0 sudo[95528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 sudo[95528]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 sudo[95553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:07 compute-0 sudo[95553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 sudo[95553]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:07 compute-0 sudo[95578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:26:07 compute-0 sudo[95578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:08 compute-0 podman[95677]: 2025-11-22 05:26:08.035122967 +0000 UTC m=+0.049454467 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:26:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:08 compute-0 podman[95677]: 2025-11-22 05:26:08.133910616 +0000 UTC m=+0.148242096 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:26:08 compute-0 sudo[95578]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:08 compute-0 sudo[95798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:08 compute-0 sudo[95798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:08 compute-0 sudo[95798]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:08 compute-0 sudo[95823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:08 compute-0 sudo[95823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:08 compute-0 sudo[95823]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:08 compute-0 sudo[95848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:08 compute-0 sudo[95848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:08 compute-0 sudo[95848]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:08 compute-0 sudo[95873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:26:08 compute-0 sudo[95873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:09 compute-0 ceph-mon[75840]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:09 compute-0 sudo[95873]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e567e22d-c5d8-47b3-98de-df7b2d5b106c does not exist
Nov 22 05:26:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ca7fb3da-7c92-428b-afd5-6bef58d49637 does not exist
Nov 22 05:26:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 35b26a0d-eb33-4e7d-8de4-7099bd9052ba does not exist
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:09 compute-0 sudo[95928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:09 compute-0 sudo[95928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:09 compute-0 sudo[95928]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:09 compute-0 sudo[95953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:09 compute-0 sudo[95953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:09 compute-0 sudo[95953]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:09 compute-0 sudo[95978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:09 compute-0 sudo[95978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:09 compute-0 sudo[95978]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:09 compute-0 sudo[96003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:26:09 compute-0 sudo[96003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.013036114 +0000 UTC m=+0.035061424 container create 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:10 compute-0 systemd[1]: Started libpod-conmon-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope.
Nov 22 05:26:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:09.998081883 +0000 UTC m=+0.020107203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.096042246 +0000 UTC m=+0.118067586 container init 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.102569182 +0000 UTC m=+0.124594502 container start 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.105988809 +0000 UTC m=+0.128014219 container attach 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:10 compute-0 ecstatic_feynman[96084]: 167 167
Nov 22 05:26:10 compute-0 systemd[1]: libpod-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope: Deactivated successfully.
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.108032373 +0000 UTC m=+0.130057693 container died 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-30de82f8eeb63aa7dab4ea686330c3ace5f0dc962071fe34efd793fb8eeacd51-merged.mount: Deactivated successfully.
Nov 22 05:26:10 compute-0 podman[96067]: 2025-11-22 05:26:10.144974385 +0000 UTC m=+0.166999685 container remove 9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:10 compute-0 systemd[1]: libpod-conmon-9a3d3e0ffbec217e97be9bc08d81b1c62d0b1aa86a5b983660a18f93e2177aba.scope: Deactivated successfully.
Nov 22 05:26:10 compute-0 podman[96106]: 2025-11-22 05:26:10.336774511 +0000 UTC m=+0.072013887 container create 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:26:10 compute-0 systemd[1]: Started libpod-conmon-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope.
Nov 22 05:26:10 compute-0 podman[96106]: 2025-11-22 05:26:10.305498477 +0000 UTC m=+0.040737933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:10 compute-0 podman[96106]: 2025-11-22 05:26:10.437725807 +0000 UTC m=+0.172965293 container init 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:10 compute-0 podman[96106]: 2025-11-22 05:26:10.449717245 +0000 UTC m=+0.184956631 container start 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:10 compute-0 podman[96106]: 2025-11-22 05:26:10.45400975 +0000 UTC m=+0.189249216 container attach 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:11 compute-0 ceph-mon[75840]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:11 compute-0 musing_galois[96122]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:26:11 compute-0 musing_galois[96122]: --> relative data size: 1.0
Nov 22 05:26:11 compute-0 musing_galois[96122]: --> All data devices are unavailable
Nov 22 05:26:11 compute-0 systemd[1]: libpod-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Deactivated successfully.
Nov 22 05:26:11 compute-0 systemd[1]: libpod-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Consumed 1.077s CPU time.
Nov 22 05:26:11 compute-0 podman[96106]: 2025-11-22 05:26:11.576220651 +0000 UTC m=+1.311460027 container died 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccc575bd8e140e1e83a17e66ea1703a7aa9679685f773602728c5530f6f0aea8-merged.mount: Deactivated successfully.
Nov 22 05:26:12 compute-0 podman[96106]: 2025-11-22 05:26:12.491236253 +0000 UTC m=+2.226475629 container remove 3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:26:12 compute-0 sudo[96003]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:12 compute-0 systemd[1]: libpod-conmon-3a021cdc55bcce3e1fee7cf201787775221593ea1806606b3cf70f99e8df00f1.scope: Deactivated successfully.
Nov 22 05:26:12 compute-0 sudo[96164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:12 compute-0 sudo[96164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:12 compute-0 sudo[96164]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:12 compute-0 sudo[96191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:12 compute-0 sudo[96191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:12 compute-0 sudo[96191]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:12 compute-0 sudo[96216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:12 compute-0 sudo[96216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:12 compute-0 sudo[96216]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:12 compute-0 sudo[96241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:26:12 compute-0 sudo[96241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.209000939 +0000 UTC m=+0.046221136 container create 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:26:13 compute-0 systemd[1]: Started libpod-conmon-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope.
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.189707501 +0000 UTC m=+0.026927698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.30279099 +0000 UTC m=+0.140011187 container init 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.314183598 +0000 UTC m=+0.151403775 container start 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.317796431 +0000 UTC m=+0.155016648 container attach 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:26:13 compute-0 naughty_noether[96320]: 167 167
Nov 22 05:26:13 compute-0 systemd[1]: libpod-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope: Deactivated successfully.
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.322730657 +0000 UTC m=+0.159950874 container died 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:13 compute-0 ceph-mon[75840]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a22abe11b9d0008be9ede1805424b7471c5f57ddc00e6a34f947d5abe6d6be1-merged.mount: Deactivated successfully.
Nov 22 05:26:13 compute-0 podman[96305]: 2025-11-22 05:26:13.376573361 +0000 UTC m=+0.213793548 container remove 1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:13 compute-0 systemd[1]: libpod-conmon-1df56c39242261ee3147b78915fa8471b5490dd1c35d3e76d8fbee77e64c17ff.scope: Deactivated successfully.
Nov 22 05:26:13 compute-0 podman[96344]: 2025-11-22 05:26:13.603848643 +0000 UTC m=+0.067398422 container create 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:13 compute-0 systemd[1]: Started libpod-conmon-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope.
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:13 compute-0 podman[96344]: 2025-11-22 05:26:13.575194491 +0000 UTC m=+0.038744320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:13 compute-0 podman[96344]: 2025-11-22 05:26:13.740815522 +0000 UTC m=+0.204365291 container init 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:26:13 compute-0 podman[96344]: 2025-11-22 05:26:13.753063388 +0000 UTC m=+0.216613167 container start 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:13 compute-0 podman[96344]: 2025-11-22 05:26:13.778508478 +0000 UTC m=+0.242058237 container attach 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:26:14 compute-0 ceph-mon[75840]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:14 compute-0 laughing_chaum[96360]: {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     "0": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "devices": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "/dev/loop3"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             ],
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_name": "ceph_lv0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_size": "21470642176",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "name": "ceph_lv0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "tags": {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.crush_device_class": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.encrypted": "0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_id": "0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.vdo": "0"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             },
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "vg_name": "ceph_vg0"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         }
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     ],
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     "1": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "devices": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "/dev/loop4"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             ],
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_name": "ceph_lv1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_size": "21470642176",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "name": "ceph_lv1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "tags": {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.crush_device_class": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.encrypted": "0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_id": "1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.vdo": "0"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             },
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "vg_name": "ceph_vg1"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         }
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     ],
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     "2": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "devices": [
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "/dev/loop5"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             ],
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_name": "ceph_lv2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_size": "21470642176",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "name": "ceph_lv2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "tags": {
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.crush_device_class": "",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.encrypted": "0",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osd_id": "2",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:                 "ceph.vdo": "0"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             },
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "type": "block",
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:             "vg_name": "ceph_vg2"
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:         }
Nov 22 05:26:14 compute-0 laughing_chaum[96360]:     ]
Nov 22 05:26:14 compute-0 laughing_chaum[96360]: }
Nov 22 05:26:14 compute-0 systemd[1]: libpod-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope: Deactivated successfully.
Nov 22 05:26:14 compute-0 podman[96344]: 2025-11-22 05:26:14.589969982 +0000 UTC m=+1.053519761 container died 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a0f7ed9c505ea404011d6c71f611c7c6381d698305ac0d38e7ec231c4256d8a-merged.mount: Deactivated successfully.
Nov 22 05:26:14 compute-0 sshd-session[96188]: Connection closed by authenticating user root 123.253.22.30 port 45470 [preauth]
Nov 22 05:26:14 compute-0 podman[96344]: 2025-11-22 05:26:14.679173099 +0000 UTC m=+1.142722888 container remove 5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:26:14 compute-0 systemd[1]: libpod-conmon-5d3d3a61658380736e5db44b59dde461fc4fccbfb05b745351c13be50748a558.scope: Deactivated successfully.
Nov 22 05:26:14 compute-0 sudo[96241]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:14 compute-0 sudo[96382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:14 compute-0 sudo[96382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:14 compute-0 sudo[96382]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:14 compute-0 sudo[96407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:14 compute-0 sudo[96407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:14 compute-0 sudo[96407]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:14 compute-0 sudo[96432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:14 compute-0 sudo[96432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:14 compute-0 sudo[96432]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:15 compute-0 sudo[96457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:26:15 compute-0 sudo[96457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.406967029 +0000 UTC m=+0.054261137 container create b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:15 compute-0 systemd[1]: Started libpod-conmon-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope.
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.382003535 +0000 UTC m=+0.029297683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.499347546 +0000 UTC m=+0.146641684 container init b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.50646002 +0000 UTC m=+0.153754148 container start b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.510111665 +0000 UTC m=+0.157405773 container attach b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:26:15 compute-0 great_gagarin[96540]: 167 167
Nov 22 05:26:15 compute-0 systemd[1]: libpod-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope: Deactivated successfully.
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.511850889 +0000 UTC m=+0.159144998 container died b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-786a7cc9bc52571cdc8d2b0422b9fa0824e54db84eedd2e6d2ae775353f4478a-merged.mount: Deactivated successfully.
Nov 22 05:26:15 compute-0 podman[96523]: 2025-11-22 05:26:15.562762822 +0000 UTC m=+0.210056900 container remove b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:26:15 compute-0 systemd[1]: libpod-conmon-b970eedbdcd96870cba88dce1dfd77840f2945dd75c879025cf8025d2699155c.scope: Deactivated successfully.
Nov 22 05:26:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:15 compute-0 podman[96563]: 2025-11-22 05:26:15.755525507 +0000 UTC m=+0.056344474 container create e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:26:15 compute-0 systemd[1]: Started libpod-conmon-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope.
Nov 22 05:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:15 compute-0 podman[96563]: 2025-11-22 05:26:15.726281137 +0000 UTC m=+0.027100094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:15 compute-0 podman[96563]: 2025-11-22 05:26:15.852624353 +0000 UTC m=+0.153443360 container init e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:15 compute-0 podman[96563]: 2025-11-22 05:26:15.866539721 +0000 UTC m=+0.167358688 container start e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:26:15 compute-0 podman[96563]: 2025-11-22 05:26:15.870917959 +0000 UTC m=+0.171736936 container attach e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:26:16 compute-0 ceph-mon[75840]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:16 compute-0 confident_nash[96579]: {
Nov 22 05:26:16 compute-0 confident_nash[96579]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_id": 1,
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "type": "bluestore"
Nov 22 05:26:16 compute-0 confident_nash[96579]:     },
Nov 22 05:26:16 compute-0 confident_nash[96579]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_id": 2,
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "type": "bluestore"
Nov 22 05:26:16 compute-0 confident_nash[96579]:     },
Nov 22 05:26:16 compute-0 confident_nash[96579]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_id": 0,
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:16 compute-0 confident_nash[96579]:         "type": "bluestore"
Nov 22 05:26:16 compute-0 confident_nash[96579]:     }
Nov 22 05:26:16 compute-0 confident_nash[96579]: }
Nov 22 05:26:16 compute-0 systemd[1]: libpod-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope: Deactivated successfully.
Nov 22 05:26:16 compute-0 podman[96563]: 2025-11-22 05:26:16.809673407 +0000 UTC m=+1.110492344 container died e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9dedf4e10831011bc1a45c75e61b18dcb23df8d7df40bb39647a1dd7b69519b-merged.mount: Deactivated successfully.
Nov 22 05:26:16 compute-0 podman[96563]: 2025-11-22 05:26:16.8638001 +0000 UTC m=+1.164619027 container remove e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:26:16 compute-0 systemd[1]: libpod-conmon-e8258bee6456d19a3a109aeee4f988c5b7868a9343d51228865f2a1dabcca7c8.scope: Deactivated successfully.
Nov 22 05:26:16 compute-0 sudo[96457]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:16 compute-0 sudo[96622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:16 compute-0 sudo[96622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:16 compute-0 sudo[96622]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:17 compute-0 sudo[96647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:26:17 compute-0 sudo[96647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:17 compute-0 sudo[96647]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:18 compute-0 sudo[96695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ierogzbevsygokgyognpzacdiqfeuwzu ; /usr/bin/python3'
Nov 22 05:26:18 compute-0 sudo[96695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:18 compute-0 python3[96697]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:18 compute-0 podman[96699]: 2025-11-22 05:26:18.579453305 +0000 UTC m=+0.049394775 container create f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:18 compute-0 systemd[1]: Started libpod-conmon-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope.
Nov 22 05:26:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:18 compute-0 podman[96699]: 2025-11-22 05:26:18.554430848 +0000 UTC m=+0.024372308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:18 compute-0 podman[96699]: 2025-11-22 05:26:18.672960408 +0000 UTC m=+0.142901868 container init f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 05:26:18 compute-0 podman[96699]: 2025-11-22 05:26:18.680253817 +0000 UTC m=+0.150195167 container start f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:18 compute-0 podman[96699]: 2025-11-22 05:26:18.684838292 +0000 UTC m=+0.154779672 container attach f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:26:18 compute-0 ceph-mon[75840]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 05:26:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306877251' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:26:19 compute-0 competent_keller[96716]: 
Nov 22 05:26:19 compute-0 competent_keller[96716]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":142,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502763520,"bytes_avail":63909163008,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T05:25:45.657644+0000","services":{}},"progress_events":{}}
Nov 22 05:26:19 compute-0 systemd[1]: libpod-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope: Deactivated successfully.
Nov 22 05:26:19 compute-0 podman[96699]: 2025-11-22 05:26:19.296288871 +0000 UTC m=+0.766230231 container died f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e92abe679985ed099a1b8e75a475ba955e149651d378e5fab4430a7b32da1ea1-merged.mount: Deactivated successfully.
Nov 22 05:26:19 compute-0 podman[96699]: 2025-11-22 05:26:19.418001731 +0000 UTC m=+0.887943091 container remove f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4 (image=quay.io/ceph/ceph:v18, name=competent_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:19 compute-0 systemd[1]: libpod-conmon-f479a73662855ca859f8777dad51455cb293984653575442fca17aa87bf8dfd4.scope: Deactivated successfully.
Nov 22 05:26:19 compute-0 sudo[96695]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:19 compute-0 sudo[96777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scpqzvtmecdddfdgnvekwvgtmjijgpaf ; /usr/bin/python3'
Nov 22 05:26:19 compute-0 sudo[96777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:19 compute-0 python3[96779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/306877251' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:26:19 compute-0 podman[96780]: 2025-11-22 05:26:19.966647685 +0000 UTC m=+0.049560220 container create 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:19 compute-0 systemd[1]: Started libpod-conmon-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope.
Nov 22 05:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:20 compute-0 podman[96780]: 2025-11-22 05:26:19.944007533 +0000 UTC m=+0.026920088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:20 compute-0 podman[96780]: 2025-11-22 05:26:20.043235215 +0000 UTC m=+0.126147760 container init 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:26:20 compute-0 podman[96780]: 2025-11-22 05:26:20.04975568 +0000 UTC m=+0.132668225 container start 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:20 compute-0 podman[96780]: 2025-11-22 05:26:20.054755737 +0000 UTC m=+0.137668272 container attach 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:26:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 22 05:26:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Nov 22 05:26:20 compute-0 tender_leakey[96795]: pool 'vms' created
Nov 22 05:26:20 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Nov 22 05:26:20 compute-0 ceph-mon[75840]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:20 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:20 compute-0 systemd[1]: libpod-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope: Deactivated successfully.
Nov 22 05:26:20 compute-0 podman[96780]: 2025-11-22 05:26:20.993965501 +0000 UTC m=+1.076878046 container died 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-22784bf8c7499d4088debc0e3bbb3a5e78581f0d9269a7d298d4e6ebe958fc8b-merged.mount: Deactivated successfully.
Nov 22 05:26:21 compute-0 podman[96780]: 2025-11-22 05:26:21.038467721 +0000 UTC m=+1.121380246 container remove 05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789 (image=quay.io/ceph/ceph:v18, name=tender_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:26:21 compute-0 systemd[1]: libpod-conmon-05800984fc93f8dad2aef1e9f8e8a03124c8465b53a00706ef6c17fb59387789.scope: Deactivated successfully.
Nov 22 05:26:21 compute-0 sudo[96777]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:21 compute-0 sudo[96857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segrsyqzxvckdkgtxrtdzwstbdwohaag ; /usr/bin/python3'
Nov 22 05:26:21 compute-0 sudo[96857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:21 compute-0 python3[96859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:21 compute-0 podman[96860]: 2025-11-22 05:26:21.447411119 +0000 UTC m=+0.063974814 container create 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:21 compute-0 systemd[1]: Started libpod-conmon-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope.
Nov 22 05:26:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:21 compute-0 podman[96860]: 2025-11-22 05:26:21.508558002 +0000 UTC m=+0.125121727 container init 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:21 compute-0 podman[96860]: 2025-11-22 05:26:21.513700094 +0000 UTC m=+0.130263789 container start 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:26:21 compute-0 podman[96860]: 2025-11-22 05:26:21.420311426 +0000 UTC m=+0.036875211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:21 compute-0 podman[96860]: 2025-11-22 05:26:21.517093761 +0000 UTC m=+0.133657486 container attach 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:26:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v58: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 22 05:26:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 22 05:26:21 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/754716600' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:21 compute-0 ceph-mon[75840]: osdmap e18: 3 total, 3 up, 3 in
Nov 22 05:26:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 22 05:26:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 22 05:26:22 compute-0 ceph-mon[75840]: pgmap v58: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:22 compute-0 ceph-mon[75840]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:22 compute-0 ceph-mon[75840]: osdmap e19: 3 total, 3 up, 3 in
Nov 22 05:26:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 22 05:26:22 compute-0 loving_babbage[96875]: pool 'volumes' created
Nov 22 05:26:22 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 22 05:26:23 compute-0 systemd[1]: libpod-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope: Deactivated successfully.
Nov 22 05:26:23 compute-0 podman[96860]: 2025-11-22 05:26:23.024705519 +0000 UTC m=+1.641269264 container died 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ba5d876ead03d9ebf57e25109ada2cdbd9652fff6d52a8a65911711c0dd060-merged.mount: Deactivated successfully.
Nov 22 05:26:23 compute-0 podman[96860]: 2025-11-22 05:26:23.071128539 +0000 UTC m=+1.687692244 container remove 4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769 (image=quay.io/ceph/ceph:v18, name=loving_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:26:23 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:23 compute-0 systemd[1]: libpod-conmon-4cca244dd48b81e38196b309660f90da0d703b83f1fbf2bb558b42b52591f769.scope: Deactivated successfully.
Nov 22 05:26:23 compute-0 sudo[96857]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:23 compute-0 sudo[96937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffngqrwkfitfabapafllibxqnhjkyhgy ; /usr/bin/python3'
Nov 22 05:26:23 compute-0 sudo[96937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:23 compute-0 python3[96939]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:23 compute-0 podman[96940]: 2025-11-22 05:26:23.497641041 +0000 UTC m=+0.042696765 container create b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:23 compute-0 systemd[1]: Started libpod-conmon-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope.
Nov 22 05:26:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:23 compute-0 podman[96940]: 2025-11-22 05:26:23.479337275 +0000 UTC m=+0.024393019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:23 compute-0 podman[96940]: 2025-11-22 05:26:23.584465273 +0000 UTC m=+0.129521027 container init b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:26:23 compute-0 podman[96940]: 2025-11-22 05:26:23.594763997 +0000 UTC m=+0.139819721 container start b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:26:23 compute-0 podman[96940]: 2025-11-22 05:26:23.598382491 +0000 UTC m=+0.143438215 container attach b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:26:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v61: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 22 05:26:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/746340552' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:24 compute-0 ceph-mon[75840]: osdmap e20: 3 total, 3 up, 3 in
Nov 22 05:26:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 22 05:26:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 22 05:26:24 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 22 05:26:25 compute-0 ceph-mon[75840]: pgmap v61: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:25 compute-0 ceph-mon[75840]: osdmap e21: 3 total, 3 up, 3 in
Nov 22 05:26:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 22 05:26:25 compute-0 elated_faraday[96955]: pool 'backups' created
Nov 22 05:26:25 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 22 05:26:25 compute-0 systemd[1]: libpod-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope: Deactivated successfully.
Nov 22 05:26:25 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:25 compute-0 podman[96982]: 2025-11-22 05:26:25.097657648 +0000 UTC m=+0.032039230 container died b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ed3cdf3bf84c59eed85f6fba053134027d6d099778c2804104b2979b1161172-merged.mount: Deactivated successfully.
Nov 22 05:26:25 compute-0 podman[96982]: 2025-11-22 05:26:25.155512468 +0000 UTC m=+0.089894080 container remove b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9 (image=quay.io/ceph/ceph:v18, name=elated_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:26:25 compute-0 systemd[1]: libpod-conmon-b0b771885026c129f84ce4e8715f732db5200fe8cb8c817471b3276523d554a9.scope: Deactivated successfully.
Nov 22 05:26:25 compute-0 sudo[96937]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:25 compute-0 sudo[97020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeurwvflydyjaywlzmjbhyunyqorcxpj ; /usr/bin/python3'
Nov 22 05:26:25 compute-0 sudo[97020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:25 compute-0 python3[97022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:25 compute-0 podman[97023]: 2025-11-22 05:26:25.605056573 +0000 UTC m=+0.062278260 container create b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:25 compute-0 systemd[1]: Started libpod-conmon-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope.
Nov 22 05:26:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v64: 4 pgs: 2 active+clean, 1 unknown, 1 creating+peering; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:25 compute-0 podman[97023]: 2025-11-22 05:26:25.577509796 +0000 UTC m=+0.034731563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:25 compute-0 podman[97023]: 2025-11-22 05:26:25.711933896 +0000 UTC m=+0.169155643 container init b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:25 compute-0 podman[97023]: 2025-11-22 05:26:25.723764169 +0000 UTC m=+0.180985886 container start b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:26:25 compute-0 podman[97023]: 2025-11-22 05:26:25.728212528 +0000 UTC m=+0.185434285 container attach b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 22 05:26:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/928416810' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:26 compute-0 ceph-mon[75840]: osdmap e22: 3 total, 3 up, 3 in
Nov 22 05:26:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 22 05:26:26 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 22 05:26:26 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:27 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 22 05:26:27 compute-0 ceph-mon[75840]: pgmap v64: 4 pgs: 2 active+clean, 1 unknown, 1 creating+peering; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:27 compute-0 ceph-mon[75840]: osdmap e23: 3 total, 3 up, 3 in
Nov 22 05:26:27 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:27 compute-0 ceph-mon[75840]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:27 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 22 05:26:27 compute-0 reverent_vaughan[97039]: pool 'images' created
Nov 22 05:26:27 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 22 05:26:27 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:27 compute-0 systemd[1]: libpod-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope: Deactivated successfully.
Nov 22 05:26:27 compute-0 podman[97023]: 2025-11-22 05:26:27.087139679 +0000 UTC m=+1.544361366 container died b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-86637ab9764a8ab82883b18a973a3ad4130c47ae007f16c5a101ea13aca12e77-merged.mount: Deactivated successfully.
Nov 22 05:26:27 compute-0 podman[97023]: 2025-11-22 05:26:27.144603677 +0000 UTC m=+1.601825364 container remove b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa (image=quay.io/ceph/ceph:v18, name=reverent_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:26:27 compute-0 systemd[1]: libpod-conmon-b2cc65cb1bcb5e78ef032622946cac811a03559f1d69ab624fb364dcd1a3dbfa.scope: Deactivated successfully.
Nov 22 05:26:27 compute-0 sudo[97020]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:27 compute-0 sudo[97101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkeythtrgdgxwkegvyqcjxwjmdzpdjuh ; /usr/bin/python3'
Nov 22 05:26:27 compute-0 sudo[97101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:27 compute-0 python3[97103]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:27 compute-0 podman[97104]: 2025-11-22 05:26:27.545544013 +0000 UTC m=+0.048784287 container create cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:27 compute-0 systemd[1]: Started libpod-conmon-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope.
Nov 22 05:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:27 compute-0 podman[97104]: 2025-11-22 05:26:27.522348852 +0000 UTC m=+0.025589176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:27 compute-0 podman[97104]: 2025-11-22 05:26:27.62334937 +0000 UTC m=+0.126589724 container init cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:27 compute-0 podman[97104]: 2025-11-22 05:26:27.634557303 +0000 UTC m=+0.137797607 container start cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:26:27 compute-0 podman[97104]: 2025-11-22 05:26:27.63859483 +0000 UTC m=+0.141835124 container attach cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v67: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 22 05:26:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 22 05:26:28 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 22 05:26:28 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2136351420' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:28 compute-0 ceph-mon[75840]: osdmap e24: 3 total, 3 up, 3 in
Nov 22 05:26:28 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [2] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 22 05:26:29 compute-0 ceph-mon[75840]: pgmap v67: 5 pgs: 3 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:29 compute-0 ceph-mon[75840]: osdmap e25: 3 total, 3 up, 3 in
Nov 22 05:26:29 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 22 05:26:29 compute-0 reverent_gould[97119]: pool 'cephfs.cephfs.meta' created
Nov 22 05:26:29 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 22 05:26:29 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:29 compute-0 systemd[1]: libpod-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope: Deactivated successfully.
Nov 22 05:26:29 compute-0 podman[97104]: 2025-11-22 05:26:29.11862114 +0000 UTC m=+1.621861464 container died cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-64e8e15133c79f9e488af66f9cd4ecde192455e7d6a522fee39e64f7c584f63a-merged.mount: Deactivated successfully.
Nov 22 05:26:29 compute-0 podman[97104]: 2025-11-22 05:26:29.182885393 +0000 UTC m=+1.686125667 container remove cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae (image=quay.io/ceph/ceph:v18, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:26:29 compute-0 systemd[1]: libpod-conmon-cc38d5fe01499e6f5086f972fc79bda60d054ddc55b6af3cad037f5cbc16daae.scope: Deactivated successfully.
Nov 22 05:26:29 compute-0 sudo[97101]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:29 compute-0 sudo[97183]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhxbcodpdjjtntslheeicmsqlbofqbjb ; /usr/bin/python3'
Nov 22 05:26:29 compute-0 sudo[97183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:29 compute-0 python3[97185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:29 compute-0 podman[97186]: 2025-11-22 05:26:29.602070783 +0000 UTC m=+0.076438956 container create 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:29 compute-0 systemd[1]: Started libpod-conmon-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope.
Nov 22 05:26:29 compute-0 podman[97186]: 2025-11-22 05:26:29.573019929 +0000 UTC m=+0.047388152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v70: 6 pgs: 3 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:29 compute-0 podman[97186]: 2025-11-22 05:26:29.699144648 +0000 UTC m=+0.173512811 container init 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:29 compute-0 podman[97186]: 2025-11-22 05:26:29.71001244 +0000 UTC m=+0.184380573 container start 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:26:29 compute-0 podman[97186]: 2025-11-22 05:26:29.713881072 +0000 UTC m=+0.188249245 container attach 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 22 05:26:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1650626349' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:30 compute-0 ceph-mon[75840]: osdmap e26: 3 total, 3 up, 3 in
Nov 22 05:26:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 22 05:26:30 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 22 05:26:30 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 27 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [0] r=0 lpr=26 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 05:26:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 22 05:26:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 22 05:26:31 compute-0 dreamy_chandrasekhar[97202]: pool 'cephfs.cephfs.data' created
Nov 22 05:26:31 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 22 05:26:31 compute-0 ceph-mon[75840]: pgmap v70: 6 pgs: 3 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:31 compute-0 ceph-mon[75840]: osdmap e27: 3 total, 3 up, 3 in
Nov 22 05:26:31 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 05:26:31 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=0/0 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:31 compute-0 systemd[1]: libpod-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope: Deactivated successfully.
Nov 22 05:26:31 compute-0 podman[97186]: 2025-11-22 05:26:31.14903601 +0000 UTC m=+1.623404143 container died 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3881393720dca2dbf81e56eb2514bc333c5d604e9f88498ce90b198030688fac-merged.mount: Deactivated successfully.
Nov 22 05:26:31 compute-0 podman[97186]: 2025-11-22 05:26:31.189039558 +0000 UTC m=+1.663407681 container remove 4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d (image=quay.io/ceph/ceph:v18, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:26:31 compute-0 systemd[1]: libpod-conmon-4940b18119fa56988bfa45ff037051ee51462a2da6df64eb749c0f02796cd78d.scope: Deactivated successfully.
Nov 22 05:26:31 compute-0 sudo[97183]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:31 compute-0 sudo[97264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgtwyedmtsvrygqkkwzjgovqykuffgfv ; /usr/bin/python3'
Nov 22 05:26:31 compute-0 sudo[97264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:31 compute-0 python3[97266]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:31 compute-0 podman[97267]: 2025-11-22 05:26:31.592967049 +0000 UTC m=+0.049349845 container create b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:31 compute-0 systemd[1]: Started libpod-conmon-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope.
Nov 22 05:26:31 compute-0 podman[97267]: 2025-11-22 05:26:31.571261735 +0000 UTC m=+0.027644531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:31 compute-0 podman[97267]: 2025-11-22 05:26:31.69345108 +0000 UTC m=+0.149833866 container init b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:31 compute-0 podman[97267]: 2025-11-22 05:26:31.700528843 +0000 UTC m=+0.156911629 container start b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:26:31 compute-0 podman[97267]: 2025-11-22 05:26:31.703716144 +0000 UTC m=+0.160098960 container attach b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:32 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 22 05:26:32 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2997890890' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 05:26:32 compute-0 ceph-mon[75840]: osdmap e28: 3 total, 3 up, 3 in
Nov 22 05:26:32 compute-0 ceph-mon[75840]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 22 05:26:32 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 22 05:26:32 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=0/0 les/c/f=0/0/0 sis=28) [1] r=0 lpr=28 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 22 05:26:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 05:26:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 22 05:26:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 05:26:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 22 05:26:33 compute-0 unruffled_proskuriakova[97282]: enabled application 'rbd' on pool 'vms'
Nov 22 05:26:33 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 22 05:26:33 compute-0 ceph-mon[75840]: pgmap v73: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:33 compute-0 ceph-mon[75840]: osdmap e29: 3 total, 3 up, 3 in
Nov 22 05:26:33 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 05:26:33 compute-0 systemd[1]: libpod-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope: Deactivated successfully.
Nov 22 05:26:33 compute-0 podman[97267]: 2025-11-22 05:26:33.181147002 +0000 UTC m=+1.637529788 container died b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-846864df1f0da42b82ccea21a853f7f795473def5ccb020e6563dab7cf00a16a-merged.mount: Deactivated successfully.
Nov 22 05:26:33 compute-0 podman[97267]: 2025-11-22 05:26:33.238193748 +0000 UTC m=+1.694576544 container remove b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2 (image=quay.io/ceph/ceph:v18, name=unruffled_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:33 compute-0 systemd[1]: libpod-conmon-b51bac7c7492b5e4d733e8dbf2d72ff5eeddb4b0705fb0f912da1eee42e603f2.scope: Deactivated successfully.
Nov 22 05:26:33 compute-0 sudo[97264]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:33 compute-0 sudo[97342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xawnwvqgxrnjnkrucclpctdnnyfyfxmx ; /usr/bin/python3'
Nov 22 05:26:33 compute-0 sudo[97342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:33 compute-0 python3[97344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:33 compute-0 podman[97345]: 2025-11-22 05:26:33.599318901 +0000 UTC m=+0.056640214 container create 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:33 compute-0 systemd[1]: Started libpod-conmon-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope.
Nov 22 05:26:33 compute-0 podman[97345]: 2025-11-22 05:26:33.569051258 +0000 UTC m=+0.026372621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:33 compute-0 podman[97345]: 2025-11-22 05:26:33.701431044 +0000 UTC m=+0.158752327 container init 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:26:33 compute-0 podman[97345]: 2025-11-22 05:26:33.712419499 +0000 UTC m=+0.169740822 container start 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:33 compute-0 podman[97345]: 2025-11-22 05:26:33.716820637 +0000 UTC m=+0.174141950 container attach 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:26:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1118757798' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 05:26:34 compute-0 ceph-mon[75840]: osdmap e30: 3 total, 3 up, 3 in
Nov 22 05:26:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 22 05:26:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 05:26:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 22 05:26:35 compute-0 ceph-mon[75840]: pgmap v76: 7 pgs: 1 creating+peering, 5 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:35 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 05:26:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 05:26:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 22 05:26:35 compute-0 elastic_herschel[97360]: enabled application 'rbd' on pool 'volumes'
Nov 22 05:26:35 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 22 05:26:35 compute-0 systemd[1]: libpod-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope: Deactivated successfully.
Nov 22 05:26:35 compute-0 podman[97345]: 2025-11-22 05:26:35.219446319 +0000 UTC m=+1.676767682 container died 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dcbe4b20654eb2d95fe91962bc49567c5e0d976df175ea1d4414663f482f433-merged.mount: Deactivated successfully.
Nov 22 05:26:35 compute-0 podman[97345]: 2025-11-22 05:26:35.263520522 +0000 UTC m=+1.720841805 container remove 49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294 (image=quay.io/ceph/ceph:v18, name=elastic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:26:35 compute-0 systemd[1]: libpod-conmon-49f4204fa1f4fdfa8c45a2fec5a54b44e5abf568cf1ac5bff26d2baec3e4e294.scope: Deactivated successfully.
Nov 22 05:26:35 compute-0 sudo[97342]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:35 compute-0 sudo[97420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocdyjsuqfuknvacpfarylvnzjlfynzmk ; /usr/bin/python3'
Nov 22 05:26:35 compute-0 sudo[97420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:35 compute-0 python3[97422]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:35 compute-0 podman[97423]: 2025-11-22 05:26:35.685042905 +0000 UTC m=+0.075251808 container create 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:35 compute-0 systemd[1]: Started libpod-conmon-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope.
Nov 22 05:26:35 compute-0 podman[97423]: 2025-11-22 05:26:35.652220136 +0000 UTC m=+0.042429089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:35 compute-0 podman[97423]: 2025-11-22 05:26:35.772900386 +0000 UTC m=+0.163109309 container init 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:35 compute-0 podman[97423]: 2025-11-22 05:26:35.778076713 +0000 UTC m=+0.168285576 container start 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:26:35 compute-0 podman[97423]: 2025-11-22 05:26:35.78151821 +0000 UTC m=+0.171727123 container attach 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:26:36 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1523162647' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 05:26:36 compute-0 ceph-mon[75840]: osdmap e31: 3 total, 3 up, 3 in
Nov 22 05:26:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 22 05:26:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 05:26:37 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 22 05:26:37 compute-0 ceph-mon[75840]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:37 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 05:26:37 compute-0 ceph-mon[75840]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:37 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 05:26:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 22 05:26:37 compute-0 thirsty_hermann[97438]: enabled application 'rbd' on pool 'backups'
Nov 22 05:26:37 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 22 05:26:37 compute-0 systemd[1]: libpod-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope: Deactivated successfully.
Nov 22 05:26:37 compute-0 podman[97423]: 2025-11-22 05:26:37.251901161 +0000 UTC m=+1.642110084 container died 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ea67531ee2f1b9be1c25356e78f30031b78b80c33b54a1856b5e50a3fa2d37-merged.mount: Deactivated successfully.
Nov 22 05:26:37 compute-0 podman[97423]: 2025-11-22 05:26:37.306632824 +0000 UTC m=+1.696841687 container remove 94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97 (image=quay.io/ceph/ceph:v18, name=thirsty_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:37 compute-0 systemd[1]: libpod-conmon-94bcf55604eb1cbcdae88ef51bc926efc940980f5b996baaffd249634db08f97.scope: Deactivated successfully.
Nov 22 05:26:37 compute-0 sudo[97420]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:37 compute-0 sudo[97498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nticdakxjkmzogsdtxpbkyvrwlyvrwzw ; /usr/bin/python3'
Nov 22 05:26:37 compute-0 sudo[97498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:37 compute-0 python3[97500]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:37 compute-0 podman[97501]: 2025-11-22 05:26:37.776387146 +0000 UTC m=+0.067195126 container create 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:37 compute-0 systemd[1]: Started libpod-conmon-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope.
Nov 22 05:26:37 compute-0 podman[97501]: 2025-11-22 05:26:37.748858025 +0000 UTC m=+0.039666085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:37 compute-0 podman[97501]: 2025-11-22 05:26:37.887417349 +0000 UTC m=+0.178225389 container init 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:26:37 compute-0 podman[97501]: 2025-11-22 05:26:37.897708911 +0000 UTC m=+0.188516911 container start 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:37 compute-0 podman[97501]: 2025-11-22 05:26:37.901900096 +0000 UTC m=+0.192708096 container attach 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:26:38 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/943902334' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 05:26:38 compute-0 ceph-mon[75840]: osdmap e32: 3 total, 3 up, 3 in
Nov 22 05:26:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 22 05:26:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 05:26:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 22 05:26:39 compute-0 ceph-mon[75840]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:39 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 05:26:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 05:26:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 22 05:26:39 compute-0 dreamy_lehmann[97516]: enabled application 'rbd' on pool 'images'
Nov 22 05:26:39 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 22 05:26:39 compute-0 systemd[1]: libpod-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope: Deactivated successfully.
Nov 22 05:26:39 compute-0 podman[97501]: 2025-11-22 05:26:39.275681258 +0000 UTC m=+1.566489258 container died 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-89a2cfe0f42ec77eb4627dab1b6335ba9bef1bd4725c7444a2c68a598ef420e8-merged.mount: Deactivated successfully.
Nov 22 05:26:39 compute-0 podman[97501]: 2025-11-22 05:26:39.332341715 +0000 UTC m=+1.623149675 container remove 67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45 (image=quay.io/ceph/ceph:v18, name=dreamy_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:39 compute-0 systemd[1]: libpod-conmon-67d5ac5b7f4093a2ccdc9f2ba67dbf25097db2ca9c3c2bd42fb167f0d7320c45.scope: Deactivated successfully.
Nov 22 05:26:39 compute-0 sudo[97498]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:39 compute-0 sudo[97575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtymliphsywzibairgslljanynbbwjp ; /usr/bin/python3'
Nov 22 05:26:39 compute-0 sudo[97575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:39 compute-0 python3[97577]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:39 compute-0 podman[97578]: 2025-11-22 05:26:39.676711079 +0000 UTC m=+0.036201848 container create 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:39 compute-0 systemd[1]: Started libpod-conmon-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope.
Nov 22 05:26:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:39 compute-0 podman[97578]: 2025-11-22 05:26:39.744008246 +0000 UTC m=+0.103499005 container init 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:39 compute-0 podman[97578]: 2025-11-22 05:26:39.750904181 +0000 UTC m=+0.110394960 container start 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:39 compute-0 podman[97578]: 2025-11-22 05:26:39.658225152 +0000 UTC m=+0.017715941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:39 compute-0 podman[97578]: 2025-11-22 05:26:39.754886361 +0000 UTC m=+0.114377160 container attach 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:26:40 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2146846899' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 05:26:40 compute-0 ceph-mon[75840]: osdmap e33: 3 total, 3 up, 3 in
Nov 22 05:26:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 22 05:26:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 05:26:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 22 05:26:41 compute-0 ceph-mon[75840]: pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:41 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 05:26:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 05:26:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 22 05:26:41 compute-0 friendly_brown[97594]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 22 05:26:41 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 22 05:26:41 compute-0 systemd[1]: libpod-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope: Deactivated successfully.
Nov 22 05:26:41 compute-0 podman[97578]: 2025-11-22 05:26:41.315628359 +0000 UTC m=+1.675119148 container died 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcee86317bf3de8502311e990474beaafc3d7e88d91347bebcc2984cc139e280-merged.mount: Deactivated successfully.
Nov 22 05:26:41 compute-0 podman[97578]: 2025-11-22 05:26:41.373714488 +0000 UTC m=+1.733205297 container remove 57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348 (image=quay.io/ceph/ceph:v18, name=friendly_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:26:41 compute-0 systemd[1]: libpod-conmon-57fa7c5dc4840d2ef3aedbfa4d55ed054fad2039d1a01e949b299872dd15e348.scope: Deactivated successfully.
Nov 22 05:26:41 compute-0 sudo[97575]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:41 compute-0 sudo[97654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qthowapywwueucepnbjzvlldgijowqxk ; /usr/bin/python3'
Nov 22 05:26:41 compute-0 sudo[97654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:41 compute-0 python3[97656]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:41 compute-0 podman[97657]: 2025-11-22 05:26:41.721999161 +0000 UTC m=+0.044221749 container create 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:41 compute-0 systemd[1]: Started libpod-conmon-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope.
Nov 22 05:26:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:41 compute-0 podman[97657]: 2025-11-22 05:26:41.787983888 +0000 UTC m=+0.110206236 container init 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:26:41 compute-0 podman[97657]: 2025-11-22 05:26:41.79291754 +0000 UTC m=+0.115139878 container start 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:41 compute-0 podman[97657]: 2025-11-22 05:26:41.700960566 +0000 UTC m=+0.023182924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:41 compute-0 podman[97657]: 2025-11-22 05:26:41.79650004 +0000 UTC m=+0.118722378 container attach 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:26:42 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:42 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4168673444' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 05:26:42 compute-0 ceph-mon[75840]: osdmap e34: 3 total, 3 up, 3 in
Nov 22 05:26:42 compute-0 ceph-mon[75840]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 05:26:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 22 05:26:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 05:26:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 22 05:26:43 compute-0 ceph-mon[75840]: pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 05:26:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 05:26:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 22 05:26:43 compute-0 angry_zhukovsky[97672]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 22 05:26:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 22 05:26:43 compute-0 systemd[1]: libpod-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope: Deactivated successfully.
Nov 22 05:26:43 compute-0 podman[97657]: 2025-11-22 05:26:43.322593447 +0000 UTC m=+1.644815785 container died 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5bd6e23e5feec6eaaf1a1185b70feccfcca95eff12dd1444b1174ddd03e2ae-merged.mount: Deactivated successfully.
Nov 22 05:26:43 compute-0 podman[97657]: 2025-11-22 05:26:43.360535792 +0000 UTC m=+1.682758130 container remove 4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67 (image=quay.io/ceph/ceph:v18, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:26:43 compute-0 systemd[1]: libpod-conmon-4c8d9607857047e840fb8905a59c153082d3bc05e55d03fc1f6ae029526cca67.scope: Deactivated successfully.
Nov 22 05:26:43 compute-0 sudo[97654]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:26:43
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.mgr']
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:26:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:26:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:26:44 compute-0 python3[97785]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:26:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 22 05:26:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 05:26:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 05:26:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 22 05:26:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 22 05:26:44 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 05:26:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2030706947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 05:26:44 compute-0 ceph-mon[75840]: osdmap e35: 3 total, 3 up, 3 in
Nov 22 05:26:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:44 compute-0 python3[97856]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789203.9993408-36522-241325823660178/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:26:45 compute-0 sudo[97956]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvacfewmqayplfiwoqlrlqtxivraazbf ; /usr/bin/python3'
Nov 22 05:26:45 compute-0 sudo[97956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:45 compute-0 python3[97958]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:26:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 22 05:26:45 compute-0 sudo[97956]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 22 05:26:45 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 22 05:26:45 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 05:26:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:45 compute-0 ceph-mon[75840]: pgmap v86: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:45 compute-0 ceph-mon[75840]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 05:26:45 compute-0 ceph-mon[75840]: Cluster is now healthy
Nov 22 05:26:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:45 compute-0 ceph-mon[75840]: osdmap e36: 3 total, 3 up, 3 in
Nov 22 05:26:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:45 compute-0 ceph-mon[75840]: osdmap e37: 3 total, 3 up, 3 in
Nov 22 05:26:45 compute-0 sudo[98031]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfoiwtrlinkmhzfvwpiguaemmuyasopp ; /usr/bin/python3'
Nov 22 05:26:45 compute-0 sudo[98031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:45 compute-0 python3[98033]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789204.9594376-36536-114795781789323/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=6cbec36551ab2122646d939859c1d167146d375a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:26:45 compute-0 sudo[98031]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:45 compute-0 sudo[98081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjytlhvmidnhciqhumxgssdesryjwchm ; /usr/bin/python3'
Nov 22 05:26:45 compute-0 sudo[98081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:46 compute-0 python3[98083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.119234518 +0000 UTC m=+0.056522235 container create ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:46 compute-0 systemd[1]: Started libpod-conmon-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope.
Nov 22 05:26:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.182250468 +0000 UTC m=+0.119538285 container init ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.189977863 +0000 UTC m=+0.127265580 container start ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.19339555 +0000 UTC m=+0.130683307 container attach ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.099332829 +0000 UTC m=+0.036620576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 22 05:26:46 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 05:26:46 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=9.691080093s) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active pruub 68.570899963s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:46 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=9.691080093s) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown pruub 68.570899963s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:26:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 05:26:46 compute-0 heuristic_napier[98099]: 
Nov 22 05:26:46 compute-0 heuristic_napier[98099]: [global]
Nov 22 05:26:46 compute-0 heuristic_napier[98099]:         fsid = 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:26:46 compute-0 heuristic_napier[98099]:         mon_host = 192.168.122.100
Nov 22 05:26:46 compute-0 systemd[1]: libpod-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope: Deactivated successfully.
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.698382115 +0000 UTC m=+0.635669872 container died ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-f93b5dd9ceb2acffac92d03f012043e95c530537889a7e73eb24946967ebb266-merged.mount: Deactivated successfully.
Nov 22 05:26:46 compute-0 podman[98084]: 2025-11-22 05:26:46.754085701 +0000 UTC m=+0.691373428 container remove ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718 (image=quay.io/ceph/ceph:v18, name=heuristic_napier, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:26:46 compute-0 sudo[98124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:46 compute-0 sudo[98124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:46 compute-0 sudo[98124]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:46 compute-0 systemd[1]: libpod-conmon-ce0f5bb07e66805bca6134cead9f4fa8f6a325a514702d6ac2f27ba7670f5718.scope: Deactivated successfully.
Nov 22 05:26:46 compute-0 sudo[98081]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:46 compute-0 sudo[98159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:46 compute-0 sudo[98159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:46 compute-0 sudo[98159]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:46 compute-0 sudo[98184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:46 compute-0 sudo[98184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:46 compute-0 sudo[98184]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:46 compute-0 sudo[98232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkcnbdmexfeqygbfyhufnnlnsortorkc ; /usr/bin/python3'
Nov 22 05:26:46 compute-0 sudo[98232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:46 compute-0 sudo[98233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:26:46 compute-0 sudo[98233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:47 compute-0 python3[98240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.180409962 +0000 UTC m=+0.051161464 container create c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:26:47 compute-0 systemd[1]: Started libpod-conmon-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope.
Nov 22 05:26:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.16612888 +0000 UTC m=+0.036880382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.27255837 +0000 UTC m=+0.143309922 container init c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.280588521 +0000 UTC m=+0.151340033 container start c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.283777433 +0000 UTC m=+0.154528945 container attach c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 22 05:26:47 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-mon[75840]: pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:47 compute-0 ceph-mon[75840]: osdmap e38: 3 total, 3 up, 3 in
Nov 22 05:26:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 05:26:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/715945239' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.0( empty local-lis/les=38/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [1] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:47 compute-0 podman[98347]: 2025-11-22 05:26:47.62915951 +0000 UTC m=+0.064881393 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:26:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v92: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:47 compute-0 podman[98347]: 2025-11-22 05:26:47.754955316 +0000 UTC m=+0.190677119 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 22 05:26:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1464073565' entity='client.admin' 
Nov 22 05:26:47 compute-0 frosty_nobel[98300]: set ssl_option
Nov 22 05:26:47 compute-0 systemd[1]: libpod-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope: Deactivated successfully.
Nov 22 05:26:47 compute-0 podman[98260]: 2025-11-22 05:26:47.954458574 +0000 UTC m=+0.825210076 container died c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cff7e92a1b981f661bfb2890677b1c1bf271cb7e326655ed2e75e85f3ebcf3c-merged.mount: Deactivated successfully.
Nov 22 05:26:48 compute-0 podman[98260]: 2025-11-22 05:26:48.00572865 +0000 UTC m=+0.876480162 container remove c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7 (image=quay.io/ceph/ceph:v18, name=frosty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:48 compute-0 systemd[1]: libpod-conmon-c1b247a55224dc0a9a2743aae3106322f121239f2dd211a00a79e52efdda4de7.scope: Deactivated successfully.
Nov 22 05:26:48 compute-0 sudo[98232]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:48 compute-0 sudo[98518]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxsqqqcnsqcesbzpulpzrzspqfjxdnge ; /usr/bin/python3'
Nov 22 05:26:48 compute-0 sudo[98518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:48 compute-0 sudo[98233]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 94f6dbbf-31fa-4715-a2a7-52459d366468 does not exist
Nov 22 05:26:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 89e1cd2a-5951-4e74-8294-8fae06c4c8ec does not exist
Nov 22 05:26:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ee54b519-fbba-401e-8162-75efe9bd0aac does not exist
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:48 compute-0 sudo[98528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:48 compute-0 sudo[98528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:48 compute-0 sudo[98528]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:48 compute-0 python3[98525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 22 05:26:48 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 22 05:26:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:26:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: osdmap e39: 3 total, 3 up, 3 in
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1464073565' entity='client.admin' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:48 compute-0 ceph-mon[75840]: osdmap e40: 3 total, 3 up, 3 in
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=11.716521263s) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 66.521041870s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 38 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38 pruub=13.638353348s) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active pruub 68.442977905s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=11.716521263s) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown pruub 66.521041870s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 sudo[98553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38 pruub=13.638353348s) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown pruub 68.442977905s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.f( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.10( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.12( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.11( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.13( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1f( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.15( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.14( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.16( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.19( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1a( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.17( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.18( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1b( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1c( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1d( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1e( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.b( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.c( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.d( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.e( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.3( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.4( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.5( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.6( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.7( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.8( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.9( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.a( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.1( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 40 pg[2.2( empty local-lis/les=18/19 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 sudo[98553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:48 compute-0 sudo[98553]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:48 compute-0 podman[98560]: 2025-11-22 05:26:48.385700016 +0000 UTC m=+0.044077984 container create 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:48 compute-0 systemd[1]: Started libpod-conmon-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope.
Nov 22 05:26:48 compute-0 sudo[98589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:48 compute-0 sudo[98589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:48 compute-0 sudo[98589]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:48 compute-0 podman[98560]: 2025-11-22 05:26:48.367748811 +0000 UTC m=+0.026126799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:48 compute-0 podman[98560]: 2025-11-22 05:26:48.465011464 +0000 UTC m=+0.123389452 container init 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:26:48 compute-0 podman[98560]: 2025-11-22 05:26:48.471144992 +0000 UTC m=+0.129522990 container start 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:26:48 compute-0 sudo[98621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:26:48 compute-0 sudo[98621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:48 compute-0 podman[98560]: 2025-11-22 05:26:48.475138083 +0000 UTC m=+0.133516051 container attach 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:48 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=9.470194817s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 76.047958374s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:48 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 40 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=9.470194817s) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown pruub 76.047958374s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:48 compute-0 ceph-mgr[76134]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.780029017 +0000 UTC m=+0.038167652 container create c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:48 compute-0 systemd[1]: Started libpod-conmon-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope.
Nov 22 05:26:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.845067082 +0000 UTC m=+0.103205737 container init c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.850024395 +0000 UTC m=+0.108163040 container start c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:48 compute-0 elastic_hamilton[98722]: 167 167
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.853198286 +0000 UTC m=+0.111336941 container attach c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:48 compute-0 systemd[1]: libpod-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope: Deactivated successfully.
Nov 22 05:26:48 compute-0 conmon[98722]: conmon c0d1418aca576a486721 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope/container/memory.events
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.855877807 +0000 UTC m=+0.114016462 container died c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.762069222 +0000 UTC m=+0.020207877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-24f47fe30a3bfec703caa6a161c5113f3eff7f43dd33ed327eff2e64cf0402a1-merged.mount: Deactivated successfully.
Nov 22 05:26:48 compute-0 podman[98687]: 2025-11-22 05:26:48.890127398 +0000 UTC m=+0.148266033 container remove c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hamilton, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:26:48 compute-0 systemd[1]: libpod-conmon-c0d1418aca576a486721699da32a0bae43818c8d7d7a5e98d0b0459198b30ce3.scope: Deactivated successfully.
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 05:26:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 05:26:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:49 compute-0 heuristic_gagarin[98617]: Scheduled rgw.rgw update...
Nov 22 05:26:49 compute-0 systemd[1]: libpod-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope: Deactivated successfully.
Nov 22 05:26:49 compute-0 podman[98560]: 2025-11-22 05:26:49.04982924 +0000 UTC m=+0.708207238 container died 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1775bfe90e516388a2abc7754c5868e915b883a953fb40f03a7f5628961d7590-merged.mount: Deactivated successfully.
Nov 22 05:26:49 compute-0 podman[98560]: 2025-11-22 05:26:49.112519862 +0000 UTC m=+0.770897870 container remove 1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9 (image=quay.io/ceph/ceph:v18, name=heuristic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:49 compute-0 systemd[1]: libpod-conmon-1d8b012a8691e01545d1d823aa0b25d8618379044e5d9e4f504e9553b3a89de9.scope: Deactivated successfully.
Nov 22 05:26:49 compute-0 podman[98748]: 2025-11-22 05:26:49.144421962 +0000 UTC m=+0.058790957 container create a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:26:49 compute-0 sudo[98518]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:49 compute-0 systemd[1]: Started libpod-conmon-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope.
Nov 22 05:26:49 compute-0 podman[98748]: 2025-11-22 05:26:49.112760959 +0000 UTC m=+0.027130004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:49 compute-0 podman[98748]: 2025-11-22 05:26:49.244179801 +0000 UTC m=+0.158548856 container init a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:26:49 compute-0 podman[98748]: 2025-11-22 05:26:49.252000597 +0000 UTC m=+0.166369592 container start a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:49 compute-0 podman[98748]: 2025-11-22 05:26:49.255706051 +0000 UTC m=+0.170075046 container attach a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 22 05:26:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 22 05:26:49 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 49c4bc42-23cd-4ec0-84cf-b1f877a6a039 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 43235f00-d76b-41ad-a422-a6b30308113f (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event f1dff32a-54af-40a2-bd07-f3d878b142d3 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 9ecf5ac5-058c-44f9-a159-ae9513330978 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event a318c3d6-fe2a-412e-b7f4-daa6f1e4413a (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event db1b119e-34db-4b05-adcf-b3739a6d94a0 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-mon[75840]: pgmap v92: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:26:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:26:49 compute-0 ceph-mon[75840]: osdmap e41: 3 total, 3 up, 3 in
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=38/41 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.0( empty local-lis/les=40/41 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=18/18 les/c/f=19/19/0 sis=38) [2] r=0 lpr=38 pi=[18,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 41 pg[5.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [2] r=0 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.0( empty local-lis/les=40/41 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [0] r=0 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 22 05:26:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 22 05:26:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 124 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:50 compute-0 python3[98860]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:26:50 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 22 05:26:50 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 22 05:26:50 compute-0 gallant_wright[98777]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:26:50 compute-0 gallant_wright[98777]: --> relative data size: 1.0
Nov 22 05:26:50 compute-0 gallant_wright[98777]: --> All data devices are unavailable
Nov 22 05:26:50 compute-0 systemd[1]: libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Deactivated successfully.
Nov 22 05:26:50 compute-0 systemd[1]: libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Consumed 1.010s CPU time.
Nov 22 05:26:50 compute-0 conmon[98777]: conmon a8e15ff78f17ff8cd97d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope/container/memory.events
Nov 22 05:26:50 compute-0 podman[98748]: 2025-11-22 05:26:50.322732568 +0000 UTC m=+1.237101523 container died a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa4977eff96ca1b203604464a6cc6f557714ac4ac60d53ed4878286e3546974-merged.mount: Deactivated successfully.
Nov 22 05:26:50 compute-0 systemd[77455]: Starting Mark boot as successful...
Nov 22 05:26:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 22 05:26:50 compute-0 systemd[77455]: Finished Mark boot as successful.
Nov 22 05:26:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 22 05:26:50 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 22 05:26:50 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=11.733865738s) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active pruub 80.106666565s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:50 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=11.733865738s) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown pruub 80.106666565s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:50 compute-0 podman[98748]: 2025-11-22 05:26:50.386823152 +0000 UTC m=+1.301192107 container remove a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:26:50 compute-0 ceph-mon[75840]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:50 compute-0 ceph-mon[75840]: Saving service rgw.rgw spec with placement compute-0
Nov 22 05:26:50 compute-0 ceph-mon[75840]: pgmap v95: 131 pgs: 124 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:50 compute-0 systemd[1]: libpod-conmon-a8e15ff78f17ff8cd97df10b585a102bb7713eabd63b536ee4bfdb510fb7912e.scope: Deactivated successfully.
Nov 22 05:26:50 compute-0 sudo[98621]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:50 compute-0 python3[98952]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789209.822787-36577-201479021486685/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:26:50 compute-0 sudo[98965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:50 compute-0 sudo[98965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:50 compute-0 sudo[98965]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:50 compute-0 sudo[98990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:50 compute-0 sudo[98990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:50 compute-0 sudo[98990]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:50 compute-0 sudo[99035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:50 compute-0 sudo[99035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:50 compute-0 sudo[99035]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:50 compute-0 sudo[99064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:26:50 compute-0 sudo[99064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:50 compute-0 sudo[99113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smflsfbndngxhluftniyatvvxpzrcsxv ; /usr/bin/python3'
Nov 22 05:26:50 compute-0 sudo[99113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:50 compute-0 python3[99124]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.018613346 +0000 UTC m=+0.035110442 container create 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:51 compute-0 systemd[1]: Started libpod-conmon-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope.
Nov 22 05:26:51 compute-0 podman[99163]: 2025-11-22 05:26:51.062522596 +0000 UTC m=+0.049163579 container create 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:26:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.003559807 +0000 UTC m=+0.020056943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:51 compute-0 systemd[1]: Started libpod-conmon-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope.
Nov 22 05:26:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.11281795 +0000 UTC m=+0.129315126 container init 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.118678732 +0000 UTC m=+0.135175818 container start 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:51 compute-0 affectionate_goldwasser[99183]: 167 167
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.122946718 +0000 UTC m=+0.139443884 container attach 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.123383109 +0000 UTC m=+0.139880195 container died 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:26:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 22 05:26:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:51 compute-0 systemd[1]: libpod-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope: Deactivated successfully.
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 podman[99163]: 2025-11-22 05:26:51.045554974 +0000 UTC m=+0.032195987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-671b7fe88a110ed4dc2cd0d9d80fac0f9607796752c67294f0e5d09abe01acf7-merged.mount: Deactivated successfully.
Nov 22 05:26:51 compute-0 podman[99163]: 2025-11-22 05:26:51.150690194 +0000 UTC m=+0.137331197 container init 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:51 compute-0 podman[99163]: 2025-11-22 05:26:51.158460399 +0000 UTC m=+0.145101422 container start 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:51 compute-0 podman[99156]: 2025-11-22 05:26:51.168610318 +0000 UTC m=+0.185107394 container remove 1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:26:51 compute-0 systemd[1]: libpod-conmon-1166e9f0a78cee12f79f13aa95466d4a84f48286aef0cdf36a0122c6aabcbfb3.scope: Deactivated successfully.
Nov 22 05:26:51 compute-0 podman[99163]: 2025-11-22 05:26:51.181670392 +0000 UTC m=+0.168311375 container attach 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 podman[99214]: 2025-11-22 05:26:51.378715605 +0000 UTC m=+0.055414071 container create 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-mon[75840]: 3.1 scrub starts
Nov 22 05:26:51 compute-0 ceph-mon[75840]: 3.1 scrub ok
Nov 22 05:26:51 compute-0 ceph-mon[75840]: 5.1 scrub starts
Nov 22 05:26:51 compute-0 ceph-mon[75840]: 5.1 scrub ok
Nov 22 05:26:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:26:51 compute-0 ceph-mon[75840]: osdmap e42: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 ceph-mon[75840]: osdmap e43: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.0( empty local-lis/les=42/43 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [0] r=0 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 systemd[1]: Started libpod-conmon-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope.
Nov 22 05:26:51 compute-0 podman[99214]: 2025-11-22 05:26:51.356575366 +0000 UTC m=+0.033273862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:51 compute-0 podman[99214]: 2025-11-22 05:26:51.490549556 +0000 UTC m=+0.167248032 container init 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:51 compute-0 podman[99214]: 2025-11-22 05:26:51.499814705 +0000 UTC m=+0.176513151 container start 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:51 compute-0 podman[99214]: 2025-11-22 05:26:51.503101749 +0000 UTC m=+0.179800205 container attach 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42 pruub=12.465258598s) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active pruub 76.696258545s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.0( empty local-lis/les=28/29 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42 pruub=12.465258598s) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown pruub 76.696258545s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.10( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.d( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.14( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.12( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.16( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.19( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.17( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1e( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1d( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.7( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 43 pg[7.b( empty local-lis/les=28/29 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 05:26:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0[75836]: 2025-11-22T05:26:51.739+0000 7f1d8fe96640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e2 new map
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T05:26:51.740040+0000
                                           modified        2025-11-22T05:26:51.740073+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.12( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.17( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.16( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.10( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1d( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.14( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.7( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=42/44 n=0 ec=28/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.d( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 44 pg[7.19( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=28/28 les/c/f=29/29/0 sis=42) [1] r=0 lpr=42 pi=[28,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:51 compute-0 systemd[1]: libpod-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope: Deactivated successfully.
Nov 22 05:26:51 compute-0 podman[99256]: 2025-11-22 05:26:51.851299319 +0000 UTC m=+0.042275994 container died 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8425aa3929f4ce607b31de3c9d48e0bbfe2f541e223868a00ad0039a430a15cd-merged.mount: Deactivated successfully.
Nov 22 05:26:51 compute-0 podman[99256]: 2025-11-22 05:26:51.889971921 +0000 UTC m=+0.080948566 container remove 5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4 (image=quay.io/ceph/ceph:v18, name=jolly_shirley, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:51 compute-0 systemd[1]: libpod-conmon-5004268f8c0dfb4baeeee1dc14f4843229211121f4264da428f027ba925c87d4.scope: Deactivated successfully.
Nov 22 05:26:51 compute-0 sudo[99113]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:52 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 22 05:26:52 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 22 05:26:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:52 compute-0 sudo[99292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huonquigexftegbhvjvtynmzrobpvepr ; /usr/bin/python3'
Nov 22 05:26:52 compute-0 sudo[99292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:52 compute-0 python3[99294]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.287592606 +0000 UTC m=+0.044446714 container create b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:26:52 compute-0 distracted_neumann[99230]: {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     "0": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "devices": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "/dev/loop3"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             ],
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_name": "ceph_lv0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_size": "21470642176",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "name": "ceph_lv0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "tags": {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.crush_device_class": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.encrypted": "0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_id": "0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.vdo": "0"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             },
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "vg_name": "ceph_vg0"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         }
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     ],
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     "1": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "devices": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "/dev/loop4"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             ],
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_name": "ceph_lv1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_size": "21470642176",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "name": "ceph_lv1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "tags": {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.crush_device_class": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.encrypted": "0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_id": "1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.vdo": "0"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             },
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "vg_name": "ceph_vg1"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         }
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     ],
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     "2": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "devices": [
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "/dev/loop5"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             ],
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_name": "ceph_lv2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_size": "21470642176",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "name": "ceph_lv2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "tags": {
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.cluster_name": "ceph",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.crush_device_class": "",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.encrypted": "0",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osd_id": "2",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:                 "ceph.vdo": "0"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             },
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "type": "block",
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:             "vg_name": "ceph_vg2"
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:         }
Nov 22 05:26:52 compute-0 distracted_neumann[99230]:     ]
Nov 22 05:26:52 compute-0 distracted_neumann[99230]: }
Nov 22 05:26:52 compute-0 podman[99214]: 2025-11-22 05:26:52.317803267 +0000 UTC m=+0.994501713 container died 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:26:52 compute-0 systemd[1]: Started libpod-conmon-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope.
Nov 22 05:26:52 compute-0 systemd[1]: libpod-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope: Deactivated successfully.
Nov 22 05:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6384f44582b8f359f2c1ed6fc9c6ef9092b5dc632e88adde832cac51dd64a2d-merged.mount: Deactivated successfully.
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.26869 +0000 UTC m=+0.025544108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:52 compute-0 podman[99214]: 2025-11-22 05:26:52.373515852 +0000 UTC m=+1.050214318 container remove 7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.378127387 +0000 UTC m=+0.134981495 container init b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.385546854 +0000 UTC m=+0.142400952 container start b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.389636726 +0000 UTC m=+0.146490834 container attach b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:52 compute-0 systemd[1]: libpod-conmon-7d00efa30f01a25beebe07234d8ecb5899052e7abb33f4dfb035c01949382680.scope: Deactivated successfully.
Nov 22 05:26:52 compute-0 ceph-mon[75840]: 5.2 scrub starts
Nov 22 05:26:52 compute-0 ceph-mon[75840]: 5.2 scrub ok
Nov 22 05:26:52 compute-0 ceph-mon[75840]: pgmap v98: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 05:26:52 compute-0 ceph-mon[75840]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 05:26:52 compute-0 ceph-mon[75840]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 05:26:52 compute-0 ceph-mon[75840]: osdmap e44: 3 total, 3 up, 3 in
Nov 22 05:26:52 compute-0 ceph-mon[75840]: fsmap cephfs:0
Nov 22 05:26:52 compute-0 ceph-mon[75840]: Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:52 compute-0 ceph-mon[75840]: 4.1 scrub starts
Nov 22 05:26:52 compute-0 ceph-mon[75840]: 4.1 scrub ok
Nov 22 05:26:52 compute-0 sudo[99064]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:52 compute-0 sudo[99329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:52 compute-0 sudo[99329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:52 compute-0 sudo[99329]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:52 compute-0 sudo[99354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:52 compute-0 sudo[99354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:52 compute-0 sudo[99354]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:52 compute-0 sudo[99379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:52 compute-0 sudo[99379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:52 compute-0 sudo[99379]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:52 compute-0 sudo[99404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:26:52 compute-0 sudo[99404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:52 compute-0 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:52 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 05:26:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:52 compute-0 keen_dijkstra[99316]: Scheduled mds.cephfs update...
Nov 22 05:26:52 compute-0 systemd[1]: libpod-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope: Deactivated successfully.
Nov 22 05:26:52 compute-0 conmon[99316]: conmon b82f4dbe4310ff7a2554 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope/container/memory.events
Nov 22 05:26:52 compute-0 podman[99297]: 2025-11-22 05:26:52.988510678 +0000 UTC m=+0.745364776 container died b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-daffe346d22e3d19b9107abba7e01987ffba0c9cdeb2f404780055a3a99248a0-merged.mount: Deactivated successfully.
Nov 22 05:26:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 22 05:26:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.068927741 +0000 UTC m=+0.074089381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:53 compute-0 podman[99297]: 2025-11-22 05:26:53.165651312 +0000 UTC m=+0.922505450 container remove b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1 (image=quay.io/ceph/ceph:v18, name=keen_dijkstra, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:26:53 compute-0 sudo[99292]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.247662691 +0000 UTC m=+0.252824301 container create e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:26:53 compute-0 systemd[1]: libpod-conmon-b82f4dbe4310ff7a255494376be09f6cd8294ff22c8f21c26f2498f3a1e6a6b1.scope: Deactivated successfully.
Nov 22 05:26:53 compute-0 systemd[1]: Started libpod-conmon-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope.
Nov 22 05:26:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.318832366 +0000 UTC m=+0.323994016 container init e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.3257242 +0000 UTC m=+0.330885770 container start e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.329153588 +0000 UTC m=+0.334315188 container attach e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:26:53 compute-0 elastic_yalow[99516]: 167 167
Nov 22 05:26:53 compute-0 systemd[1]: libpod-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope: Deactivated successfully.
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.332714388 +0000 UTC m=+0.337875968 container died e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:26:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-39bc85c58b1979ffc844f60eb9f6af826e3dd56397c6bce89f3d5eaa46606a23-merged.mount: Deactivated successfully.
Nov 22 05:26:53 compute-0 podman[99490]: 2025-11-22 05:26:53.366116791 +0000 UTC m=+0.371278361 container remove e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_yalow, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:53 compute-0 systemd[1]: libpod-conmon-e1b37a9f475bffd753393c201f4ff142d868ae531ebbd63367d7df582eebd7ae.scope: Deactivated successfully.
Nov 22 05:26:53 compute-0 podman[99539]: 2025-11-22 05:26:53.514125468 +0000 UTC m=+0.052841102 container create c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:26:53 compute-0 systemd[1]: Started libpod-conmon-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope.
Nov 22 05:26:53 compute-0 podman[99539]: 2025-11-22 05:26:53.485445212 +0000 UTC m=+0.024160896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:53 compute-0 podman[99539]: 2025-11-22 05:26:53.616436755 +0000 UTC m=+0.155152409 container init c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:26:53 compute-0 podman[99539]: 2025-11-22 05:26:53.633370676 +0000 UTC m=+0.172086320 container start c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:26:53 compute-0 podman[99539]: 2025-11-22 05:26:53.637824867 +0000 UTC m=+0.176540521 container attach c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:26:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v100: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:53 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 9 completed events
Nov 22 05:26:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:26:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:53 compute-0 sudo[99635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rriunyfkhgmeunitsshpblozxqneyvim ; /usr/bin/python3'
Nov 22 05:26:53 compute-0 sudo[99635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:53 compute-0 python3[99637]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 05:26:53 compute-0 sudo[99635]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:53 compute-0 ceph-mon[75840]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 05:26:53 compute-0 ceph-mon[75840]: Saving service mds.cephfs spec with placement compute-0
Nov 22 05:26:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:53 compute-0 ceph-mon[75840]: 4.2 scrub starts
Nov 22 05:26:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:54 compute-0 sudo[99708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlwtgmjjvxsemtxwbuueklqsxexnqwkl ; /usr/bin/python3'
Nov 22 05:26:54 compute-0 sudo[99708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:54 compute-0 python3[99710]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789213.6421154-36607-184308925635291/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=18cfea5729768871b1211ef73b57421c54974f8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:26:54 compute-0 sudo[99708]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:54 compute-0 friendly_hoover[99555]: {
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_id": 1,
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "type": "bluestore"
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     },
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_id": 2,
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "type": "bluestore"
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     },
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_id": 0,
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:         "type": "bluestore"
Nov 22 05:26:54 compute-0 friendly_hoover[99555]:     }
Nov 22 05:26:54 compute-0 friendly_hoover[99555]: }
Nov 22 05:26:54 compute-0 systemd[1]: libpod-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Deactivated successfully.
Nov 22 05:26:54 compute-0 podman[99539]: 2025-11-22 05:26:54.650744984 +0000 UTC m=+1.189460628 container died c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:26:54 compute-0 systemd[1]: libpod-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Consumed 1.030s CPU time.
Nov 22 05:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d7cb9e3592a3ea551a1ab0bc2ea98854b9039f2d62c42cd486bc97c82e517e-merged.mount: Deactivated successfully.
Nov 22 05:26:54 compute-0 sudo[99793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uumkkguiggikizlhofyximvqcpnjijhl ; /usr/bin/python3'
Nov 22 05:26:54 compute-0 sudo[99793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:54 compute-0 podman[99539]: 2025-11-22 05:26:54.73352158 +0000 UTC m=+1.272237224 container remove c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hoover, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:54 compute-0 systemd[1]: libpod-conmon-c75f9eb6ddca81c4f93908d3b03766d4fa5a6c82a3687c928af45152a6d99426.scope: Deactivated successfully.
Nov 22 05:26:54 compute-0 sudo[99404]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:54 compute-0 sudo[99803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:54 compute-0 sudo[99803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:54 compute-0 python3[99802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:54 compute-0 sudo[99803]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:54 compute-0 sudo[99829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:26:54 compute-0 sudo[99829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:54 compute-0 sudo[99829]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:54 compute-0 podman[99828]: 2025-11-22 05:26:54.938568082 +0000 UTC m=+0.046561370 container create d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:26:54 compute-0 systemd[1]: Started libpod-conmon-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope.
Nov 22 05:26:54 compute-0 ceph-mon[75840]: 4.2 scrub ok
Nov 22 05:26:54 compute-0 ceph-mon[75840]: pgmap v100: 193 pgs: 62 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:55 compute-0 sudo[99866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:54.92028084 +0000 UTC m=+0.028274158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:55 compute-0 sudo[99866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:55 compute-0 sudo[99866]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:55.031194981 +0000 UTC m=+0.139188279 container init d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:55.038721121 +0000 UTC m=+0.146714429 container start d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:55.042743622 +0000 UTC m=+0.150736950 container attach d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:55 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 22 05:26:55 compute-0 sudo[99897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:55 compute-0 sudo[99897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:55 compute-0 sudo[99897]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:55 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 22 05:26:55 compute-0 sudo[99922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:55 compute-0 sudo[99922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:55 compute-0 sudo[99922]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:55 compute-0 sudo[99947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:26:55 compute-0 sudo[99947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 05:26:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 systemd[1]: libpod-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope: Deactivated successfully.
Nov 22 05:26:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:26:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:55.679380904 +0000 UTC m=+0.787374212 container died d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-be10a7d4eec497e1ecef3382468a37d48925fa3f86bbd4183656b7ab790e2e92-merged.mount: Deactivated successfully.
Nov 22 05:26:55 compute-0 podman[99828]: 2025-11-22 05:26:55.74261152 +0000 UTC m=+0.850604838 container remove d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0 (image=quay.io/ceph/ceph:v18, name=dreamy_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:55 compute-0 systemd[1]: libpod-conmon-d482b45c516ec8877d397b51883dcfd08235268dc4316dc93923cd610cc35fe0.scope: Deactivated successfully.
Nov 22 05:26:55 compute-0 sudo[99793]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:55 compute-0 podman[100078]: 2025-11-22 05:26:55.900115521 +0000 UTC m=+0.072961526 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/173646705' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:26:56 compute-0 podman[100078]: 2025-11-22 05:26:56.023088994 +0000 UTC m=+0.195935029 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:56 compute-0 sudo[100207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndqxrtnnxaevyngqpvxupvkzvctvtdq ; /usr/bin/python3'
Nov 22 05:26:56 compute-0 sudo[100207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:56 compute-0 sudo[99947]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1cb8d91d-e1f1-454b-a180-4a7e6a632fd3 does not exist
Nov 22 05:26:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 08b29d6a-d436-408e-9fa3-f6f30e63c7d7 does not exist
Nov 22 05:26:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 70d5aba2-edf7-4089-8c04-6e19875e0840 does not exist
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:56 compute-0 python3[100211]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 22 05:26:56 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.717928886s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.376487732s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.717873573s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.376487732s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724143982s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382774353s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.704218864s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362899780s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724062920s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382781982s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724061012s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382774353s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.724007607s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382781982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.704098701s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362899780s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703700066s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362640381s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723844528s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.382820129s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703682899s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362640381s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723811150s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.382820129s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703653336s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362701416s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703567505s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362632751s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703536034s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362625122s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703519821s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362625122s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703532219s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362632751s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703607559s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362701416s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703354836s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362663269s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703322411s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362663269s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723654747s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383041382s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723637581s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383041382s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723557472s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383026123s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723571777s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383079529s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703066826s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362594604s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723556519s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383079529s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.703038216s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362594604s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.723469734s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383026123s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702985764s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362594604s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702969551s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362594604s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722650528s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383110046s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722628593s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383117676s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.702134132s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362579346s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722602844s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383110046s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722570419s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383117676s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701698303s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362266541s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701620102s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362266541s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701550484s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.362266541s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722375870s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383125305s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722352028s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383125305s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701520920s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362266541s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.701845169s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.362579346s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722254753s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383171082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722234726s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383171082s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700984001s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361976624s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700995445s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361999512s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722096443s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383171082s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700943947s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361999512s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.722075462s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383171082s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700878143s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361976624s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700733185s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361968994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700714111s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361968994s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700569153s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361900330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700536728s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361900330s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721822739s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383224487s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700509071s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361907959s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721802711s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383224487s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700466156s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361907959s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721702576s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383247375s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700295448s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361862183s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700276375s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361862183s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721673965s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383247375s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700249672s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361892700s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721422195s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383094788s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721402168s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383094788s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700220108s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361892700s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700215340s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361938477s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721586227s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383354187s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700030327s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 83.361839294s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721556664s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383354187s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700012207s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361839294s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721508026s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383384705s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.700185776s) [2] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.361938477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721473694s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383384705s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721433640s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383392334s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721416473s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383392334s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721368790s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 85.383415222s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.721340179s) [1] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 85.383415222s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.7( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.12( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[4.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078650475s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309280396s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677594185s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908340454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677576065s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908348083s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677559853s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908340454s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677550316s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908348083s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078399658s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309295654s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078375816s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309295654s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078358650s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309280396s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078365326s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.309356689s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.078348160s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.309356689s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677197456s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908218384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677384377s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908439636s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677154541s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908218384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677034378s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908187866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677010536s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908187866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.677263260s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908439636s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085411072s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316711426s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676651001s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907966614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085359573s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316711426s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676495552s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907890320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085236549s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316635132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676591873s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907966614s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676454544s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907890320s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085198402s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316635132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676443100s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908004761s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676421165s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908004761s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085186958s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316780090s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676184654s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907791138s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085124969s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316749573s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085160255s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316780090s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.676139832s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907791138s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085083961s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316780090s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085094452s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316749573s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085066795s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316780090s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085054398s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316856384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085033417s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316856384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.14( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.085009575s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316909790s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675668716s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907585144s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.15( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675601959s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907569885s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084976196s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316909790s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675622940s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907585144s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675580025s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907569885s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.11( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675228119s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907424927s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084730148s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316947937s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675197601s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907424927s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.11( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084705353s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316947937s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084603310s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316955566s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675089836s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907463074s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084586143s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316955566s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675065041s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907463074s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[4.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688663483s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819038391s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688179970s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818595886s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688629150s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819038391s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688137054s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818595886s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084455490s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316970825s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084383965s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316917419s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084434509s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316970825s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674995422s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907539368s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688069344s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818649292s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084353447s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316917419s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674961090s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907539368s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.688043594s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818649292s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084322929s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.316963196s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084305763s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.316963196s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688065529s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818786621s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675435066s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.908126831s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.675417900s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.908126831s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.688025475s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818786621s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674637794s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907394409s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084237099s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317008972s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687857628s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818672180s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674612045s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907394409s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687729836s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818710327s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687702179s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818710327s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687690735s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818740845s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687647820s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818740845s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687611580s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818794250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687568665s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818794250s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687413216s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818771362s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687615395s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819000244s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687337875s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818771362s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.687469482s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819000244s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.18( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.687012672s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818672180s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1c( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.11( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.16( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.15( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.17( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.15( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.12( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084217072s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317008972s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084184647s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317031860s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084166527s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317031860s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.a( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685537338s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818832397s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685498238s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818832397s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.685382843s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818870544s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.685358047s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818870544s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685474396s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819129944s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685450554s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819129944s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.685056686s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818908691s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683683395s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.818885803s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683631897s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818885803s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.683634758s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819198608s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.683594704s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819198608s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674372673s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907310486s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683130264s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819015503s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683092117s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819015503s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084068298s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317024231s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674351692s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907310486s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682958603s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819099426s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674348831s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907318115s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.084048271s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317024231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674324036s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907318115s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083875656s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317031860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682925224s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819099426s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674061775s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907241821s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083850861s) [2] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317031860s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.674036026s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907241821s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083807945s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 80.317039490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.684010506s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820343018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=42/44 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45 pruub=11.083779335s) [0] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.317039490s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.673927307s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907218933s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.683967590s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820343018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.1b( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.f( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.11( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.15( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.12( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.3( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.13( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682831764s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819534302s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.6( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.16( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682792664s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819534302s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.9( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.a( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.1f( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1b( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682342529s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819274902s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.673910141s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907218933s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682275772s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819274902s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682429314s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819602966s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.684988976s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.818908691s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682388306s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819602966s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682341576s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819641113s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.19( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682229042s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819641113s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682801247s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820190430s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682647705s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820190430s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682138443s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819725037s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682074547s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819725037s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682081223s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819763184s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.682004929s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819763184s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682127953s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819679260s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.682007790s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819770813s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681927681s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819770813s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681835175s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819679260s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681981087s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820068359s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681766510s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819778442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681697845s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819839478s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681667328s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819862366s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681603432s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819778442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681883812s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820068359s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681626320s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819862366s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.13( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681440353s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819862366s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681529999s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819885254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681361198s) [0] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819839478s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681388855s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819862366s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681385994s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819885254s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.11( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681068420s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.819915771s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681196213s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820121765s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681037903s) [1] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.819915771s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681141853s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820121765s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680937767s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820159912s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681224823s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820449829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.11( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681012154s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820243835s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.681147575s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820281982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.681201935s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820449829s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680883408s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820159912s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680886269s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820198059s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680997849s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680974007s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820343018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680857658s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820198059s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=38/41 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45 pruub=8.680956841s) [0] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820343018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680778503s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active pruub 71.820281982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680764198s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820243835s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45 pruub=8.680745125s) [1] r=-1 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.820281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.664406776s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active pruub 83.907714844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45 pruub=14.664361954s) [2] r=-1 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.907714844s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.7( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.8( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.c( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.1d( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[7.1a( empty local-lis/les=0/0 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.1e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.e( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.f( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.7( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.15( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[5.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1d( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[2.1f( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.7( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 45 pg[3.1f( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.3( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.6( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.9( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 45 pg[5.18( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 45 pg[3.5( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:26:56 compute-0 sudo[100226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:56 compute-0 sudo[100226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:56 compute-0 podman[100231]: 2025-11-22 05:26:56.744101709 +0000 UTC m=+0.065589370 container create aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:26:56 compute-0 sudo[100226]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:56 compute-0 systemd[1]: Started libpod-conmon-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope.
Nov 22 05:26:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:56 compute-0 sudo[100268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:56 compute-0 sudo[100268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:56 compute-0 sudo[100268]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:56 compute-0 podman[100231]: 2025-11-22 05:26:56.72907716 +0000 UTC m=+0.050564851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:56 compute-0 podman[100231]: 2025-11-22 05:26:56.82842747 +0000 UTC m=+0.149915161 container init aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:56 compute-0 podman[100231]: 2025-11-22 05:26:56.8346059 +0000 UTC m=+0.156093561 container start aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:56 compute-0 podman[100231]: 2025-11-22 05:26:56.838232071 +0000 UTC m=+0.159719752 container attach aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:26:56 compute-0 sudo[100298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:56 compute-0 sudo[100298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:56 compute-0 sudo[100298]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:56 compute-0 sudo[100324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:26:56 compute-0 sudo[100324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:56 compute-0 ceph-mon[75840]: 4.3 scrub starts
Nov 22 05:26:56 compute-0 ceph-mon[75840]: 4.3 scrub ok
Nov 22 05:26:56 compute-0 ceph-mon[75840]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:26:56 compute-0 ceph-mon[75840]: osdmap e45: 3 total, 3 up, 3 in
Nov 22 05:26:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:26:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 22 05:26:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.320119565 +0000 UTC m=+0.047992192 container create 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 05:26:57 compute-0 systemd[1]: Started libpod-conmon-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope.
Nov 22 05:26:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.299588863 +0000 UTC m=+0.027461500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.403219909 +0000 UTC m=+0.131092536 container init 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.414127314 +0000 UTC m=+0.141999951 container start 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.418106164 +0000 UTC m=+0.145978861 container attach 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:57 compute-0 goofy_golick[100424]: 167 167
Nov 22 05:26:57 compute-0 systemd[1]: libpod-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope: Deactivated successfully.
Nov 22 05:26:57 compute-0 conmon[100424]: conmon 2684653301d781c572d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope/container/memory.events
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.422449592 +0000 UTC m=+0.150322199 container died 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-36cacda25cb0581d93346512f43975724954134e68d20479fa89f4419d898e7c-merged.mount: Deactivated successfully.
Nov 22 05:26:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 05:26:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759638' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:26:57 compute-0 goofy_satoshi[100279]: 
Nov 22 05:26:57 compute-0 goofy_satoshi[100279]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":180,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":45,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84213760,"bytes_avail":64327712768,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-22T05:26:55.676392+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b968293a-a6bf-4f5b-a8de-544606057d9d":{"message":"Global Recovery Event (5s)\n      [===================.........] (remaining: 2s)","progress":0.68947368860244751,"add_to_ceph_s":true}}}
Nov 22 05:26:57 compute-0 podman[100407]: 2025-11-22 05:26:57.48399352 +0000 UTC m=+0.211866157 container remove 2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_golick, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:26:57 compute-0 systemd[1]: libpod-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope: Deactivated successfully.
Nov 22 05:26:57 compute-0 podman[100231]: 2025-11-22 05:26:57.488295857 +0000 UTC m=+0.809783528 container died aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:57 compute-0 systemd[1]: libpod-conmon-2684653301d781c572d06d278c68cae891fd68eb5c95f2aeb2de2807430b5538.scope: Deactivated successfully.
Nov 22 05:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad4fba946b05bb8727fd9c0f92dc45648004009bc49f85b03c656ccf506eca2e-merged.mount: Deactivated successfully.
Nov 22 05:26:57 compute-0 podman[100231]: 2025-11-22 05:26:57.535379889 +0000 UTC m=+0.856867560 container remove aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 22 05:26:57 compute-0 systemd[1]: libpod-conmon-aadeacf398930f64f0cdb023830a3c37cd88faa5222209a912b02fd6b9f9e086.scope: Deactivated successfully.
Nov 22 05:26:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 22 05:26:57 compute-0 sudo[100207]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 22 05:26:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 22 05:26:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.1b( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.11( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.16( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.8( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.b( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1f( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.2( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1c( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.f( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.1d( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.14( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.1f( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.13( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.11( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[6.f( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.18( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.19( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [0] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [0] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [2] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=45) [2] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=42/28 lis/c=42/42 les/c/f=44/44/0 sis=45) [2] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.15( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.3( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.6( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.7( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.6( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=45/46 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=45) [1] r=0 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[2.9( empty local-lis/les=45/46 n=0 ec=38/18 lis/c=38/38 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[38,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 podman[100460]: 2025-11-22 05:26:57.7172948 +0000 UTC m=+0.077988410 container create b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:57 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=45) [1] r=0 lpr=45 pi=[40,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:26:57 compute-0 podman[100460]: 2025-11-22 05:26:57.677933923 +0000 UTC m=+0.038627583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:57 compute-0 systemd[1]: Started libpod-conmon-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope.
Nov 22 05:26:57 compute-0 sudo[100497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzmwfefnxjidafkigyduddhgtjiffhcp ; /usr/bin/python3'
Nov 22 05:26:57 compute-0 sudo[100497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:57 compute-0 podman[100460]: 2025-11-22 05:26:57.838791439 +0000 UTC m=+0.199485109 container init b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:26:57 compute-0 podman[100460]: 2025-11-22 05:26:57.849170563 +0000 UTC m=+0.209864173 container start b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:57 compute-0 podman[100460]: 2025-11-22 05:26:57.853396908 +0000 UTC m=+0.214090518 container attach b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:26:57 compute-0 python3[100503]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:57 compute-0 podman[100507]: 2025-11-22 05:26:57.992033364 +0000 UTC m=+0.052258539 container create f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:58 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3759638' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:26:58 compute-0 ceph-mon[75840]: osdmap e46: 3 total, 3 up, 3 in
Nov 22 05:26:58 compute-0 systemd[1]: Started libpod-conmon-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope.
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:57.964259137 +0000 UTC m=+0.024484352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:58.091373284 +0000 UTC m=+0.151598489 container init f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:58.101536392 +0000 UTC m=+0.161761567 container start f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:58.106344111 +0000 UTC m=+0.166569336 container attach f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:26:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:26:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250960978' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:26:58 compute-0 eloquent_davinci[100522]: 
Nov 22 05:26:58 compute-0 eloquent_davinci[100522]: {"epoch":1,"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","modified":"2025-11-22T05:23:51.756901Z","created":"2025-11-22T05:23:51.756901Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 22 05:26:58 compute-0 eloquent_davinci[100522]: dumped monmap epoch 1
Nov 22 05:26:58 compute-0 systemd[1]: libpod-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope: Deactivated successfully.
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:58.772533361 +0000 UTC m=+0.832758516 container died f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:26:58 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event b968293a-a6bf-4f5b-a8de-544606057d9d (Global Recovery Event) in 10 seconds
Nov 22 05:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8372d19bf14643201d80421a7edf69a8fc6fae40cc205babdbdff7a32a355244-merged.mount: Deactivated successfully.
Nov 22 05:26:58 compute-0 podman[100507]: 2025-11-22 05:26:58.816356929 +0000 UTC m=+0.876582054 container remove f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3 (image=quay.io/ceph/ceph:v18, name=eloquent_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:26:58 compute-0 systemd[1]: libpod-conmon-f3d21df3a36ba01607d8a36c5e6bac8c0e56b799902de7314a1280a3921a27f3.scope: Deactivated successfully.
Nov 22 05:26:58 compute-0 sudo[100497]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:58 compute-0 stoic_driscoll[100501]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:26:58 compute-0 stoic_driscoll[100501]: --> relative data size: 1.0
Nov 22 05:26:58 compute-0 stoic_driscoll[100501]: --> All data devices are unavailable
Nov 22 05:26:58 compute-0 systemd[1]: libpod-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Deactivated successfully.
Nov 22 05:26:58 compute-0 systemd[1]: libpod-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Consumed 1.037s CPU time.
Nov 22 05:26:59 compute-0 podman[100585]: 2025-11-22 05:26:59.013420952 +0000 UTC m=+0.030296065 container died b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:26:59 compute-0 ceph-mon[75840]: 4.6 scrub starts
Nov 22 05:26:59 compute-0 ceph-mon[75840]: 4.6 scrub ok
Nov 22 05:26:59 compute-0 ceph-mon[75840]: 3.2 scrub starts
Nov 22 05:26:59 compute-0 ceph-mon[75840]: 3.2 scrub ok
Nov 22 05:26:59 compute-0 ceph-mon[75840]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2250960978' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b8113441781f66ded79bee4972ffd32fd1f0be6ed53e42631b15f6e091b828f-merged.mount: Deactivated successfully.
Nov 22 05:26:59 compute-0 podman[100585]: 2025-11-22 05:26:59.093314283 +0000 UTC m=+0.110189386 container remove b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:26:59 compute-0 systemd[1]: libpod-conmon-b2d0c9c2f9138c1655f781ea8106fd796b74803dd7184cf52fc861a18fd226cc.scope: Deactivated successfully.
Nov 22 05:26:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 22 05:26:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 22 05:26:59 compute-0 sudo[100324]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:59 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 22 05:26:59 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 22 05:26:59 compute-0 sudo[100601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:59 compute-0 sudo[100601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:59 compute-0 sudo[100601]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:59 compute-0 sudo[100649]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxbjpqneujigafzlotljfcqfkhhjmddx ; /usr/bin/python3'
Nov 22 05:26:59 compute-0 sudo[100649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:26:59 compute-0 sudo[100650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:26:59 compute-0 sudo[100650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:59 compute-0 sudo[100650]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:59 compute-0 sudo[100677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:26:59 compute-0 sudo[100677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:59 compute-0 sudo[100677]: pam_unix(sudo:session): session closed for user root
Nov 22 05:26:59 compute-0 sudo[100702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:26:59 compute-0 sudo[100702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:26:59 compute-0 python3[100663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:26:59 compute-0 podman[100727]: 2025-11-22 05:26:59.496867051 +0000 UTC m=+0.044955884 container create 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:26:59 compute-0 systemd[1]: Started libpod-conmon-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope.
Nov 22 05:26:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:26:59 compute-0 podman[100727]: 2025-11-22 05:26:59.479045519 +0000 UTC m=+0.027134372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:26:59 compute-0 podman[100727]: 2025-11-22 05:26:59.584939327 +0000 UTC m=+0.133028220 container init 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:26:59 compute-0 podman[100727]: 2025-11-22 05:26:59.592391834 +0000 UTC m=+0.140480667 container start 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:26:59 compute-0 podman[100727]: 2025-11-22 05:26:59.595654659 +0000 UTC m=+0.143743532 container attach 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:26:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.77317754 +0000 UTC m=+0.055564523 container create 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:26:59 compute-0 systemd[1]: Started libpod-conmon-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope.
Nov 22 05:26:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.745298191 +0000 UTC m=+0.027685254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.844920658 +0000 UTC m=+0.127307631 container init 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.854166796 +0000 UTC m=+0.136553799 container start 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:26:59 compute-0 peaceful_mestorf[100801]: 167 167
Nov 22 05:26:59 compute-0 systemd[1]: libpod-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope: Deactivated successfully.
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.858692888 +0000 UTC m=+0.141079881 container attach 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.85921026 +0000 UTC m=+0.141597233 container died 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a42e7c4cfbc01aa08cff93574476b5171b87bf878febfebade03eeb4dd2d26d-merged.mount: Deactivated successfully.
Nov 22 05:26:59 compute-0 podman[100784]: 2025-11-22 05:26:59.890132187 +0000 UTC m=+0.172519160 container remove 1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:26:59 compute-0 systemd[1]: libpod-conmon-1615af9da7d260f509cf5f4f96831b32a2f8b7daad451dc393d3990e7e73a89c.scope: Deactivated successfully.
Nov 22 05:27:00 compute-0 ceph-mon[75840]: 4.b scrub starts
Nov 22 05:27:00 compute-0 ceph-mon[75840]: 4.b scrub ok
Nov 22 05:27:00 compute-0 ceph-mon[75840]: 2.1 scrub starts
Nov 22 05:27:00 compute-0 ceph-mon[75840]: 2.1 scrub ok
Nov 22 05:27:00 compute-0 podman[100844]: 2025-11-22 05:27:00.092692225 +0000 UTC m=+0.052484315 container create 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:00 compute-0 systemd[1]: Started libpod-conmon-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope.
Nov 22 05:27:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 22 05:27:00 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3465824516' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 05:27:00 compute-0 vigorous_chatterjee[100742]: [client.openstack]
Nov 22 05:27:00 compute-0 vigorous_chatterjee[100742]:         key = AQDNSCFpAAAAABAAIxLSh4M1I5A41RBE4yCAiQ==
Nov 22 05:27:00 compute-0 vigorous_chatterjee[100742]:         caps mgr = "allow *"
Nov 22 05:27:00 compute-0 vigorous_chatterjee[100742]:         caps mon = "profile rbd"
Nov 22 05:27:00 compute-0 vigorous_chatterjee[100742]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 22 05:27:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:00 compute-0 podman[100844]: 2025-11-22 05:27:00.066901683 +0000 UTC m=+0.026693803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 22 05:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 22 05:27:00 compute-0 podman[100844]: 2025-11-22 05:27:00.178180482 +0000 UTC m=+0.137972572 container init 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:27:00 compute-0 systemd[1]: libpod-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope: Deactivated successfully.
Nov 22 05:27:00 compute-0 podman[100727]: 2025-11-22 05:27:00.180159986 +0000 UTC m=+0.728248819 container died 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:00 compute-0 podman[100844]: 2025-11-22 05:27:00.187830089 +0000 UTC m=+0.147622179 container start 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:00 compute-0 podman[100844]: 2025-11-22 05:27:00.192365462 +0000 UTC m=+0.152157592 container attach 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff0904bccb11ca7016d38c5550205b10c20d545c55befa650be9d978c58f564-merged.mount: Deactivated successfully.
Nov 22 05:27:00 compute-0 podman[100727]: 2025-11-22 05:27:00.223855241 +0000 UTC m=+0.771944064 container remove 693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d (image=quay.io/ceph/ceph:v18, name=vigorous_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:27:00 compute-0 systemd[1]: libpod-conmon-693472b7adc967a4344508f05b3be49697f4c9d902595664082abe7d70dbf27d.scope: Deactivated successfully.
Nov 22 05:27:00 compute-0 sudo[100649]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:00 compute-0 adoring_shaw[100860]: {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     "0": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "devices": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "/dev/loop3"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             ],
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_name": "ceph_lv0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_size": "21470642176",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "name": "ceph_lv0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "tags": {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.crush_device_class": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.encrypted": "0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_id": "0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.vdo": "0"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             },
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "vg_name": "ceph_vg0"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         }
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     ],
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     "1": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "devices": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "/dev/loop4"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             ],
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_name": "ceph_lv1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_size": "21470642176",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "name": "ceph_lv1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "tags": {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.crush_device_class": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.encrypted": "0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_id": "1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.vdo": "0"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             },
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "vg_name": "ceph_vg1"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         }
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     ],
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     "2": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "devices": [
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "/dev/loop5"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             ],
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_name": "ceph_lv2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_size": "21470642176",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "name": "ceph_lv2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "tags": {
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.crush_device_class": "",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.encrypted": "0",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osd_id": "2",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:                 "ceph.vdo": "0"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             },
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "type": "block",
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:             "vg_name": "ceph_vg2"
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:         }
Nov 22 05:27:00 compute-0 adoring_shaw[100860]:     ]
Nov 22 05:27:00 compute-0 adoring_shaw[100860]: }
Nov 22 05:27:01 compute-0 systemd[1]: libpod-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope: Deactivated successfully.
Nov 22 05:27:01 compute-0 podman[100844]: 2025-11-22 05:27:01.021149727 +0000 UTC m=+0.980941937 container died 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:27:01 compute-0 ceph-mon[75840]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:01 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3465824516' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 05:27:01 compute-0 ceph-mon[75840]: 5.6 scrub starts
Nov 22 05:27:01 compute-0 ceph-mon[75840]: 5.6 scrub ok
Nov 22 05:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d49418e5b390ca8f07427453b1ec908253908b2169827271642fa237bdd0e5bc-merged.mount: Deactivated successfully.
Nov 22 05:27:01 compute-0 podman[100844]: 2025-11-22 05:27:01.084683489 +0000 UTC m=+1.044475579 container remove 6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_shaw, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:01 compute-0 systemd[1]: libpod-conmon-6106bde7af69bcf405004953ab473d89f8351cb9408e56de180e7645af4173e7.scope: Deactivated successfully.
Nov 22 05:27:01 compute-0 sudo[100702]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:01 compute-0 sudo[100895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:01 compute-0 sudo[100895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:01 compute-0 sudo[100895]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:01 compute-0 sudo[100932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:01 compute-0 sudo[100932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:01 compute-0 sudo[100932]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:01 compute-0 sudo[100987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:01 compute-0 sudo[100987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:01 compute-0 sudo[100987]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:01 compute-0 sudo[101035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:27:01 compute-0 sudo[101035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:01 compute-0 sudo[101155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjemdvjhslsmovqescxooevidmmrdxhe ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789221.252235-36679-197720406409152/async_wrapper.py j522531097896 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789221.252235-36679-197720406409152/AnsiballZ_command.py _'
Nov 22 05:27:01 compute-0 sudo[101155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 22 05:27:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 22 05:27:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:01 compute-0 ansible-async_wrapper.py[101164]: Invoked with j522531097896 30 /home/zuul/.ansible/tmp/ansible-tmp-1763789221.252235-36679-197720406409152/AnsiballZ_command.py _
Nov 22 05:27:01 compute-0 ansible-async_wrapper.py[101201]: Starting module and watcher
Nov 22 05:27:01 compute-0 ansible-async_wrapper.py[101201]: Start watching 101202 (30)
Nov 22 05:27:01 compute-0 ansible-async_wrapper.py[101202]: Start module (101202)
Nov 22 05:27:01 compute-0 ansible-async_wrapper.py[101164]: Return async_wrapper task started.
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.72428854 +0000 UTC m=+0.058934330 container create fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:27:01 compute-0 sudo[101155]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:01 compute-0 systemd[1]: Started libpod-conmon-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope.
Nov 22 05:27:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.707398249 +0000 UTC m=+0.042044059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.818052014 +0000 UTC m=+0.152697884 container init fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.830014993 +0000 UTC m=+0.164660793 container start fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.834591077 +0000 UTC m=+0.169236917 container attach fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:27:01 compute-0 trusting_lovelace[101206]: 167 167
Nov 22 05:27:01 compute-0 systemd[1]: libpod-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope: Deactivated successfully.
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.83785074 +0000 UTC m=+0.172496580 container died fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:01 compute-0 python3[101203]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b58bf21f56f54a7722dabc7c45044c1eecdfcc0c9a1bfee52a2b2ff12868299-merged.mount: Deactivated successfully.
Nov 22 05:27:01 compute-0 podman[101186]: 2025-11-22 05:27:01.883397336 +0000 UTC m=+0.218043136 container remove fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_lovelace, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:27:01 compute-0 systemd[1]: libpod-conmon-fbec170f9620aee570f8dd5ff421b01e8a60df844bd7303f2c6ee7ac501a7e9d.scope: Deactivated successfully.
Nov 22 05:27:01 compute-0 anacron[30771]: Job `cron.weekly' started
Nov 22 05:27:01 compute-0 anacron[30771]: Job `cron.weekly' terminated
Nov 22 05:27:01 compute-0 podman[101215]: 2025-11-22 05:27:01.926688782 +0000 UTC m=+0.052847771 container create 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:01 compute-0 systemd[1]: Started libpod-conmon-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope.
Nov 22 05:27:01 compute-0 podman[101215]: 2025-11-22 05:27:01.902427656 +0000 UTC m=+0.028586705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 podman[101215]: 2025-11-22 05:27:02.016428246 +0000 UTC m=+0.142587245 container init 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:27:02 compute-0 podman[101215]: 2025-11-22 05:27:02.02413702 +0000 UTC m=+0.150295979 container start 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:27:02 compute-0 podman[101215]: 2025-11-22 05:27:02.028824825 +0000 UTC m=+0.154983814 container attach 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:27:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:02 compute-0 podman[101251]: 2025-11-22 05:27:02.06096748 +0000 UTC m=+0.046430748 container create e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:02 compute-0 systemd[1]: Started libpod-conmon-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope.
Nov 22 05:27:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:02 compute-0 podman[101251]: 2025-11-22 05:27:02.040058388 +0000 UTC m=+0.025521686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:02 compute-0 podman[101251]: 2025-11-22 05:27:02.166214762 +0000 UTC m=+0.151678070 container init e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:27:02 compute-0 podman[101251]: 2025-11-22 05:27:02.177702872 +0000 UTC m=+0.163166140 container start e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:27:02 compute-0 podman[101251]: 2025-11-22 05:27:02.181807755 +0000 UTC m=+0.167271043 container attach e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 22 05:27:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:02 compute-0 quizzical_boyd[101243]: 
Nov 22 05:27:02 compute-0 quizzical_boyd[101243]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 05:27:02 compute-0 systemd[1]: libpod-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope: Deactivated successfully.
Nov 22 05:27:02 compute-0 podman[101215]: 2025-11-22 05:27:02.598688043 +0000 UTC m=+0.724847062 container died 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5035c155c2856a14a03cf30f3ea267f1c26eea983be09eb2ee0ae101a657562-merged.mount: Deactivated successfully.
Nov 22 05:27:02 compute-0 podman[101215]: 2025-11-22 05:27:02.65219722 +0000 UTC m=+0.778356209 container remove 1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053 (image=quay.io/ceph/ceph:v18, name=quizzical_boyd, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:02 compute-0 systemd[1]: libpod-conmon-1dff08d64b7f356f55a43b87ac8a4832d18fb76226aa0f523db3085429a88053.scope: Deactivated successfully.
Nov 22 05:27:02 compute-0 ansible-async_wrapper.py[101202]: Module complete (101202)
Nov 22 05:27:02 compute-0 sudo[101357]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgcbgknltoiwpvipgertkivhsgdpscj ; /usr/bin/python3'
Nov 22 05:27:02 compute-0 sudo[101357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:03 compute-0 python3[101362]: ansible-ansible.legacy.async_status Invoked with jid=j522531097896.101164 mode=status _async_dir=/root/.ansible_async
Nov 22 05:27:03 compute-0 sudo[101357]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 ceph-mon[75840]: 3.4 scrub starts
Nov 22 05:27:03 compute-0 ceph-mon[75840]: 3.4 scrub ok
Nov 22 05:27:03 compute-0 ceph-mon[75840]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:03 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 22 05:27:03 compute-0 busy_panini[101268]: {
Nov 22 05:27:03 compute-0 busy_panini[101268]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_id": 1,
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "type": "bluestore"
Nov 22 05:27:03 compute-0 busy_panini[101268]:     },
Nov 22 05:27:03 compute-0 busy_panini[101268]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_id": 2,
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "type": "bluestore"
Nov 22 05:27:03 compute-0 busy_panini[101268]:     },
Nov 22 05:27:03 compute-0 busy_panini[101268]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_id": 0,
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:03 compute-0 busy_panini[101268]:         "type": "bluestore"
Nov 22 05:27:03 compute-0 busy_panini[101268]:     }
Nov 22 05:27:03 compute-0 busy_panini[101268]: }
Nov 22 05:27:03 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 22 05:27:03 compute-0 systemd[1]: libpod-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope: Deactivated successfully.
Nov 22 05:27:03 compute-0 podman[101251]: 2025-11-22 05:27:03.143367493 +0000 UTC m=+1.128830791 container died e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99284807c2057158690f9a3a4654748f5fe2851f235d578a77a725418c27252-merged.mount: Deactivated successfully.
Nov 22 05:27:03 compute-0 podman[101251]: 2025-11-22 05:27:03.215184733 +0000 UTC m=+1.200648011 container remove e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_panini, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:03 compute-0 sudo[101439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjqwkflyueauvzcagrbfzxpfllaqzewo ; /usr/bin/python3'
Nov 22 05:27:03 compute-0 sudo[101439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:03 compute-0 systemd[1]: libpod-conmon-e7c52c869d0c9e76f654991e57b80d149b8995e1178862c354ec735d708f327b.scope: Deactivated successfully.
Nov 22 05:27:03 compute-0 sudo[101035]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:03 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:03 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 05:27:03 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 05:27:03 compute-0 sudo[101442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:03 compute-0 python3[101441]: ansible-ansible.legacy.async_status Invoked with jid=j522531097896.101164 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 05:27:03 compute-0 sudo[101442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:03 compute-0 sudo[101442]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 sudo[101439]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 sudo[101467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:03 compute-0 sudo[101467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:03 compute-0 sudo[101467]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 sudo[101492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:03 compute-0 sudo[101492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:03 compute-0 sudo[101492]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:03 compute-0 sudo[101517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:27:03 compute-0 sudo[101517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:03 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 10 completed events
Nov 22 05:27:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:27:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:03 compute-0 sudo[101594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivynqjjnhhyfzoidvmoezdcthvhdbnbw ; /usr/bin/python3'
Nov 22 05:27:03 compute-0 sudo[101594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:03 compute-0 podman[101608]: 2025-11-22 05:27:03.906404716 +0000 UTC m=+0.045550748 container create 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:27:03 compute-0 systemd[1]: Started libpod-conmon-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope.
Nov 22 05:27:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:03 compute-0 podman[101608]: 2025-11-22 05:27:03.891680794 +0000 UTC m=+0.030826846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:04 compute-0 podman[101608]: 2025-11-22 05:27:04.001143552 +0000 UTC m=+0.140289644 container init 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:27:04 compute-0 podman[101608]: 2025-11-22 05:27:04.011196188 +0000 UTC m=+0.150342260 container start 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 05:27:04 compute-0 focused_elbakyan[101624]: 167 167
Nov 22 05:27:04 compute-0 python3[101602]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:04 compute-0 systemd[1]: libpod-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope: Deactivated successfully.
Nov 22 05:27:04 compute-0 podman[101608]: 2025-11-22 05:27:04.015776791 +0000 UTC m=+0.154922833 container attach 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:04 compute-0 podman[101608]: 2025-11-22 05:27:04.020773654 +0000 UTC m=+0.159919696 container died 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-268f688e7b7042c4bbdb36faeb6b57fe3992c1b0c2329a8f89183ab2d2d489f4-merged.mount: Deactivated successfully.
Nov 22 05:27:04 compute-0 podman[101608]: 2025-11-22 05:27:04.066776542 +0000 UTC m=+0.205922604 container remove 6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:04 compute-0 systemd[1]: libpod-conmon-6ebde17d00044a78225062773fc5207b5d33624848f9004fe6f9cab593cb9886.scope: Deactivated successfully.
Nov 22 05:27:04 compute-0 podman[101630]: 2025-11-22 05:27:04.094486047 +0000 UTC m=+0.054546171 container create 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:27:04 compute-0 systemd[1]: Started libpod-conmon-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope.
Nov 22 05:27:04 compute-0 systemd[1]: Reloading.
Nov 22 05:27:04 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 22 05:27:04 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 22 05:27:04 compute-0 podman[101630]: 2025-11-22 05:27:04.077563575 +0000 UTC m=+0.037623749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:04 compute-0 systemd-rc-local-generator[101683]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:27:04 compute-0 systemd-sysv-generator[101688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:04 compute-0 ceph-mon[75840]: 5.8 scrub starts
Nov 22 05:27:04 compute-0 ceph-mon[75840]: 5.8 scrub ok
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pzxxqv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:04 compute-0 ceph-mon[75840]: Deploying daemon rgw.rgw.compute-0.pzxxqv on compute-0
Nov 22 05:27:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:04 compute-0 podman[101630]: 2025-11-22 05:27:04.422459321 +0000 UTC m=+0.382519505 container init 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:04 compute-0 podman[101630]: 2025-11-22 05:27:04.435710769 +0000 UTC m=+0.395770933 container start 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:27:04 compute-0 podman[101630]: 2025-11-22 05:27:04.439803771 +0000 UTC m=+0.399863925 container attach 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:27:04 compute-0 systemd[1]: Reloading.
Nov 22 05:27:04 compute-0 systemd-rc-local-generator[101729]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:27:04 compute-0 systemd-sysv-generator[101733]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:27:04 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 22 05:27:04 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 22 05:27:04 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.pzxxqv for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:05 compute-0 sad_babbage[101659]: 
Nov 22 05:27:05 compute-0 sad_babbage[101659]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 05:27:05 compute-0 systemd[1]: libpod-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope: Deactivated successfully.
Nov 22 05:27:05 compute-0 podman[101804]: 2025-11-22 05:27:05.103319191 +0000 UTC m=+0.066769816 container create 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:27:05 compute-0 podman[101816]: 2025-11-22 05:27:05.120109599 +0000 UTC m=+0.033680110 container died 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9a489d8218a54da8f514f09db64453a557cda02701675fa67968ca0be8b8b03-merged.mount: Deactivated successfully.
Nov 22 05:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3e63312346980b465f8704f82ad258789ff15b937c34df2941b8593c4294cb/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pzxxqv supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:05 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Nov 22 05:27:05 compute-0 podman[101804]: 2025-11-22 05:27:05.064940896 +0000 UTC m=+0.028391561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 22 05:27:05 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Nov 22 05:27:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 22 05:27:05 compute-0 podman[101804]: 2025-11-22 05:27:05.180707145 +0000 UTC m=+0.144157730 container init 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:27:05 compute-0 podman[101816]: 2025-11-22 05:27:05.185113425 +0000 UTC m=+0.098683946 container remove 3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d (image=quay.io/ceph/ceph:v18, name=sad_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:27:05 compute-0 podman[101804]: 2025-11-22 05:27:05.18711518 +0000 UTC m=+0.150565775 container start 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:05 compute-0 bash[101804]: 0dd7e6b627830c8965e0ed7b9672a36c0bb8d90c127558e79cea3f77f481b2df
Nov 22 05:27:05 compute-0 systemd[1]: libpod-conmon-3fc4af83b2d149e02280a024df7c4b67596d48d2f11ae2d71877eeb3f48d2a5d.scope: Deactivated successfully.
Nov 22 05:27:05 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.pzxxqv for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:27:05 compute-0 sudo[101594]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:05 compute-0 sudo[101517]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:05 compute-0 radosgw[101838]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:27:05 compute-0 radosgw[101838]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 22 05:27:05 compute-0 radosgw[101838]: framework: beast
Nov 22 05:27:05 compute-0 radosgw[101838]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 22 05:27:05 compute-0 radosgw[101838]: init_numa not setting numa affinity
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 1a664a35-5bb3-4869-8e4e-8e75b7bda84f (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:05 compute-0 ceph-mon[75840]: 5.a scrub starts
Nov 22 05:27:05 compute-0 ceph-mon[75840]: 5.a scrub ok
Nov 22 05:27:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 05:27:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 05:27:05 compute-0 sudo[101900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:05 compute-0 sudo[101900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:05 compute-0 sudo[101900]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:05 compute-0 sudo[101925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:05 compute-0 sudo[101925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:05 compute-0 sudo[101925]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:05 compute-0 sudo[101950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:05 compute-0 sudo[101950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:05 compute-0 sudo[101950]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:05 compute-0 sudo[101975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1
Nov 22 05:27:05 compute-0 sudo[101975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:05 compute-0 podman[102041]: 2025-11-22 05:27:05.960395634 +0000 UTC m=+0.056367462 container create 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:06 compute-0 systemd[1]: Started libpod-conmon-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope.
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:05.931719618 +0000 UTC m=+0.027691526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:06 compute-0 sudo[102081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnzixfgnxtwhzgobzerycvipcfhkuis ; /usr/bin/python3'
Nov 22 05:27:06 compute-0 sudo[102081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:06.074993948 +0000 UTC m=+0.170965826 container init 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:06.088957782 +0000 UTC m=+0.184929610 container start 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:06.092680797 +0000 UTC m=+0.188652685 container attach 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:27:06 compute-0 ecstatic_wescoff[102083]: 167 167
Nov 22 05:27:06 compute-0 systemd[1]: libpod-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope: Deactivated successfully.
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:06.099991681 +0000 UTC m=+0.195963519 container died 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:27:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5beddd90358ee8183e9fabe4f5a39620ddaf266fb03c2edef7a42d318af69935-merged.mount: Deactivated successfully.
Nov 22 05:27:06 compute-0 podman[102041]: 2025-11-22 05:27:06.140635598 +0000 UTC m=+0.236607426 container remove 831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:06 compute-0 systemd[1]: libpod-conmon-831c8023ea5da89b10035b6681a1a2e8f7fb887aff33e62c82944358301fb35f.scope: Deactivated successfully.
Nov 22 05:27:06 compute-0 systemd[1]: Reloading.
Nov 22 05:27:06 compute-0 python3[102085]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 22 05:27:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 22 05:27:06 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 22 05:27:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 22 05:27:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 05:27:06 compute-0 systemd-rc-local-generator[102138]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:27:06 compute-0 systemd-sysv-generator[102142]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:27:06 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:06 compute-0 podman[102106]: 2025-11-22 05:27:06.302149409 +0000 UTC m=+0.060858003 container create 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 3.b scrub starts
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 3.b scrub ok
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 4.c deep-scrub starts
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 5.b scrub starts
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 4.c deep-scrub ok
Nov 22 05:27:06 compute-0 ceph-mon[75840]: 5.b scrub ok
Nov 22 05:27:06 compute-0 ceph-mon[75840]: Saving service rgw.rgw spec with placement compute-0
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.dntioh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:06 compute-0 ceph-mon[75840]: Deploying daemon mds.cephfs.compute-0.dntioh on compute-0
Nov 22 05:27:06 compute-0 ceph-mon[75840]: osdmap e47: 3 total, 3 up, 3 in
Nov 22 05:27:06 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 05:27:06 compute-0 podman[102106]: 2025-11-22 05:27:06.267544629 +0000 UTC m=+0.026253273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:06 compute-0 systemd[1]: Started libpod-conmon-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope.
Nov 22 05:27:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:06 compute-0 podman[102106]: 2025-11-22 05:27:06.558877377 +0000 UTC m=+0.317586051 container init 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:27:06 compute-0 systemd[1]: Reloading.
Nov 22 05:27:06 compute-0 podman[102106]: 2025-11-22 05:27:06.56697832 +0000 UTC m=+0.325686924 container start 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:06 compute-0 podman[102106]: 2025-11-22 05:27:06.569698912 +0000 UTC m=+0.328407566 container attach 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:06 compute-0 systemd-rc-local-generator[102183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:27:06 compute-0 systemd-sysv-generator[102186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:27:06 compute-0 ansible-async_wrapper.py[101201]: Done in kid B.
Nov 22 05:27:06 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.dntioh for 13fdadc6-d566-5465-9ac8-a148ef130da1...
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:07 compute-0 podman[102265]: 2025-11-22 05:27:07.09170455 +0000 UTC m=+0.055114674 container create 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:27:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:07 compute-0 recursing_lewin[102155]: 
Nov 22 05:27:07 compute-0 recursing_lewin[102155]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 22 05:27:07 compute-0 systemd[1]: libpod-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope: Deactivated successfully.
Nov 22 05:27:07 compute-0 podman[102106]: 2025-11-22 05:27:07.12453483 +0000 UTC m=+0.883243474 container died 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:27:07 compute-0 podman[102265]: 2025-11-22 05:27:07.063117225 +0000 UTC m=+0.026527329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-361264f3cf9549a2ecb89ae4cdf65c600f0765add394adadba8db0c3108d6ae4-merged.mount: Deactivated successfully.
Nov 22 05:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86d3d285958fd520de265e80800e25cb769df0896e5bb041467a02c667c821b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.dntioh supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:07 compute-0 podman[102265]: 2025-11-22 05:27:07.189807031 +0000 UTC m=+0.153217165 container init 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:27:07 compute-0 podman[102106]: 2025-11-22 05:27:07.194232102 +0000 UTC m=+0.952940706 container remove 6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984 (image=quay.io/ceph/ceph:v18, name=recursing_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:07 compute-0 podman[102265]: 2025-11-22 05:27:07.197789682 +0000 UTC m=+0.161199776 container start 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:27:07 compute-0 bash[102265]: 3032b7ea47665f667d27b9df452a97e38e594e6f45d0de1c012bc0fcf00601bf
Nov 22 05:27:07 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.dntioh for 13fdadc6-d566-5465-9ac8-a148ef130da1.
Nov 22 05:27:07 compute-0 sudo[102081]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 systemd[1]: libpod-conmon-6a6ddedf2eb8ef463737cfcfc224c049b3353c7f7372f2333e654967da259984.scope: Deactivated successfully.
Nov 22 05:27:07 compute-0 ceph-mds[102299]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:27:07 compute-0 ceph-mds[102299]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 22 05:27:07 compute-0 ceph-mds[102299]: main not setting numa affinity
Nov 22 05:27:07 compute-0 ceph-mds[102299]: pidfile_write: ignore empty --pid-file
Nov 22 05:27:07 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mds-cephfs-compute-0-dntioh[102286]: starting mds.cephfs.compute-0.dntioh at 
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 2 from mon.0
Nov 22 05:27:07 compute-0 sudo[101975]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 05:27:07 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 77836df8-3f54-48e9-abc3-3ced0db86ca6 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 05:27:07 compute-0 ceph-mon[75840]: osdmap e48: 3 total, 3 up, 3 in
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 new map
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T05:26:51.740040+0000
                                           modified        2025-11-22T05:26:51.740073+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.dntioh{-1:14269} state up:standby seq 1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 3 from mon.0
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Monitors have assigned me to become a standby.
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:boot
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] as mds.0
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dntioh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.dntioh"} v 0) v1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dntioh"}]: dispatch
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e3 all = 0
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e4 new map
Nov 22 05:27:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T05:26:51.740040+0000
                                           modified        2025-11-22T05:27:07.358183+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.dntioh{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 4 from mon.0
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x1
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:creating}
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x100
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x600
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x601
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x602
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x603
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x604
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x605
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x606
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x607
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x608
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.cache creating system inode with ino:0x609
Nov 22 05:27:07 compute-0 sudo[102318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:07 compute-0 sudo[102318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:07 compute-0 sudo[102318]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 ceph-mds[102299]: mds.0.4 creating_done
Nov 22 05:27:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.dntioh is now active in filesystem cephfs as rank 0
Nov 22 05:27:07 compute-0 sudo[102354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:27:07 compute-0 sudo[102354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:07 compute-0 sudo[102354]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 sudo[102379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:07 compute-0 sudo[102379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:07 compute-0 sudo[102379]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 sudo[102404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:07 compute-0 sudo[102404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:07 compute-0 sudo[102404]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 sudo[102429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:07 compute-0 sudo[102429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v111: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:07 compute-0 sudo[102429]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 22 05:27:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 22 05:27:07 compute-0 sudo[102454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:27:07 compute-0 sudo[102454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:08 compute-0 sudo[102549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqplxosykfotctkjhuuqnikvkngomiyr ; /usr/bin/python3'
Nov 22 05:27:08 compute-0 sudo[102549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:08 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 22 05:27:08 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 22 05:27:08 compute-0 python3[102556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 22 05:27:08 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 22 05:27:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 05:27:08 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:08 compute-0 podman[102578]: 2025-11-22 05:27:08.320972704 +0000 UTC m=+0.065368774 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:08 compute-0 ceph-mon[75840]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:boot
Nov 22 05:27:08 compute-0 ceph-mon[75840]: daemon mds.cephfs.compute-0.dntioh assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 05:27:08 compute-0 ceph-mon[75840]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 05:27:08 compute-0 ceph-mon[75840]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 05:27:08 compute-0 ceph-mon[75840]: Cluster is now healthy
Nov 22 05:27:08 compute-0 ceph-mon[75840]: fsmap cephfs:0 1 up:standby
Nov 22 05:27:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.dntioh"}]: dispatch
Nov 22 05:27:08 compute-0 ceph-mon[75840]: fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:creating}
Nov 22 05:27:08 compute-0 ceph-mon[75840]: daemon mds.cephfs.compute-0.dntioh is now active in filesystem cephfs as rank 0
Nov 22 05:27:08 compute-0 ceph-mon[75840]: osdmap e49: 3 total, 3 up, 3 in
Nov 22 05:27:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e5 new map
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T05:26:51.740040+0000
                                           modified        2025-11-22T05:27:08.364438+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.dntioh{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 22 05:27:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh Updating MDS map to version 5 from mon.0
Nov 22 05:27:08 compute-0 ceph-mds[102299]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 22 05:27:08 compute-0 ceph-mds[102299]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 22 05:27:08 compute-0 ceph-mds[102299]: mds.0.4 recovery_done -- successful recovery!
Nov 22 05:27:08 compute-0 ceph-mds[102299]: mds.0.4 active_start
Nov 22 05:27:08 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:active
Nov 22 05:27:08 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:active}
Nov 22 05:27:08 compute-0 podman[102592]: 2025-11-22 05:27:08.378360118 +0000 UTC m=+0.069783665 container create 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:27:08 compute-0 systemd[1]: Started libpod-conmon-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope.
Nov 22 05:27:08 compute-0 podman[102578]: 2025-11-22 05:27:08.44184752 +0000 UTC m=+0.186243550 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:27:08 compute-0 podman[102592]: 2025-11-22 05:27:08.350638833 +0000 UTC m=+0.042062380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:08 compute-0 podman[102592]: 2025-11-22 05:27:08.478785482 +0000 UTC m=+0.170209019 container init 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:27:08 compute-0 podman[102592]: 2025-11-22 05:27:08.48624563 +0000 UTC m=+0.177669157 container start 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:27:08 compute-0 podman[102592]: 2025-11-22 05:27:08.490880785 +0000 UTC m=+0.182304332 container attach 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:27:08 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Nov 22 05:27:08 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Nov 22 05:27:08 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 12 completed events
Nov 22 05:27:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:27:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:09 compute-0 sleepy_panini[102615]: 
Nov 22 05:27:09 compute-0 sleepy_panini[102615]: [{"container_id": "c4eec30b75a2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.46%", "created": "2025-11-22T05:25:10.945432Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-22T05:25:11.014820Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600973Z", "memory_usage": 11806965, "ports": [], "service_name": "crash", "started": "2025-11-22T05:25:10.830213Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@crash.compute-0", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-0.dntioh", "daemon_name": "mds.cephfs.compute-0.dntioh", "daemon_type": "mds", "events": ["2025-11-22T05:27:07.278515Z daemon:mds.cephfs.compute-0.dntioh [INFO] \"Deployed mds.cephfs.compute-0.dntioh on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "73442774e724", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.50%", "created": "2025-11-22T05:23:59.278535Z", "daemon_id": "compute-0.mscchl", 
"daemon_name": "mgr.compute-0.mscchl", "daemon_type": "mgr", "events": ["2025-11-22T05:25:16.530913Z daemon:mgr.compute-0.mscchl [INFO] \"Reconfigured mgr.compute-0.mscchl on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600830Z", "memory_usage": 549348966, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-22T05:23:59.138975Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mgr.compute-0.mscchl", "version": "18.2.7"}, {"container_id": "d2c85725d384", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.10%", "created": "2025-11-22T05:23:53.991399Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-22T05:25:15.809351Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.600653Z", "memory_request": 2147483648, "memory_usage": 39122370, "ports": [], "service_name": "mon", "started": "2025-11-22T05:23:56.813694Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@mon.compute-0", "version": "18.2.7"}, {"container_id": "49ecd6cb38e9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.55%", "created": "2025-11-22T05:25:40.511266Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-22T05:25:40.576194Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601130Z", "memory_request": 4294967296, "memory_usage": 61205381, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:40.379797Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.0", "version": "18.2.7"}, {"container_id": "4bf032245a15", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.64%", "created": "2025-11-22T05:25:45.841430Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-22T05:25:45.992780Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601262Z", "memory_request": 4294967296, "memory_usage": 61383639, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:45.632228Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.1", "version": "18.2.7"}, {"container_id": "320c74d22126", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", 
"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.63%", "created": "2025-11-22T05:25:52.068622Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-22T05:25:52.180845Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T05:26:56.601392Z", "memory_request": 4294967296, "memory_usage": 60639150, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T05:25:51.901750Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-13fdadc6-d566-5465-9ac8-a148ef130da1@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.pzxxqv", "daemon_name": "rgw.rgw.compute-0.pzxxqv", "daemon_type": "rgw", "events": ["2025-11-22T05:27:05.269512Z daemon:rgw.rgw.compute-0.pzxxqv [INFO] \"Deployed rgw.rgw.compute-0.pzxxqv on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 22 05:27:09 compute-0 systemd[1]: libpod-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope: Deactivated successfully.
Nov 22 05:27:09 compute-0 podman[102592]: 2025-11-22 05:27:09.06770049 +0000 UTC m=+0.759124007 container died 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:27:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-836af71b3678935a554e41ba2ffce2dad47a07978bd077776c0f53988a8b57c4-merged.mount: Deactivated successfully.
Nov 22 05:27:09 compute-0 podman[102592]: 2025-11-22 05:27:09.109578754 +0000 UTC m=+0.801002311 container remove 873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720 (image=quay.io/ceph/ceph:v18, name=sleepy_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:09 compute-0 systemd[1]: libpod-conmon-873e16aef11ef0d7ac170d74b86d7fca8542c09a4183df8da4a4610f2b83c720.scope: Deactivated successfully.
Nov 22 05:27:09 compute-0 sudo[102549]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:09 compute-0 sudo[102454]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:09 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:27:09 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 62597db3-339b-4b60-9b3a-b3043ed8ca5d does not exist
Nov 22 05:27:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ca0b2785-b168-4072-b6da-a447e2a1e786 does not exist
Nov 22 05:27:09 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f9a9256c-4b26-48ec-bcb9-99df7376f551 does not exist
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:09 compute-0 sudo[102790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:09 compute-0 sudo[102790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:09 compute-0 sudo[102790]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 22 05:27:09 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 22 05:27:09 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:09 compute-0 sudo[102815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:09 compute-0 sudo[102815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:09 compute-0 sudo[102815]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:09 compute-0 ceph-mon[75840]: pgmap v111: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:09 compute-0 ceph-mon[75840]: 3.d scrub starts
Nov 22 05:27:09 compute-0 ceph-mon[75840]: 3.d scrub ok
Nov 22 05:27:09 compute-0 ceph-mon[75840]: 5.d scrub starts
Nov 22 05:27:09 compute-0 ceph-mon[75840]: 5.d scrub ok
Nov 22 05:27:09 compute-0 ceph-mon[75840]: mds.? [v2:192.168.122.100:6814/1849881061,v1:192.168.122.100:6815/1849881061] up:active
Nov 22 05:27:09 compute-0 ceph-mon[75840]: fsmap cephfs:1 {0=cephfs.compute-0.dntioh=up:active}
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 05:27:09 compute-0 ceph-mon[75840]: osdmap e50: 3 total, 3 up, 3 in
Nov 22 05:27:09 compute-0 sudo[102842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:09 compute-0 sudo[102842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:09 compute-0 sudo[102842]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:09 compute-0 sudo[102867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:27:09 compute-0 sudo[102867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v114: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.876435712 +0000 UTC m=+0.057812874 container create f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:09 compute-0 systemd[1]: Started libpod-conmon-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope.
Nov 22 05:27:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.848882422 +0000 UTC m=+0.030259624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.961286056 +0000 UTC m=+0.142663308 container init f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.971925505 +0000 UTC m=+0.153302667 container start f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.97654547 +0000 UTC m=+0.157922712 container attach f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:27:09 compute-0 infallible_swartz[102949]: 167 167
Nov 22 05:27:09 compute-0 systemd[1]: libpod-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope: Deactivated successfully.
Nov 22 05:27:09 compute-0 podman[102933]: 2025-11-22 05:27:09.978201907 +0000 UTC m=+0.159579069 container died f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed5e1b8f5fb36b6e1ee923c627a4e65263354c48255deba193ee33624445d67b-merged.mount: Deactivated successfully.
Nov 22 05:27:10 compute-0 podman[102933]: 2025-11-22 05:27:10.029748529 +0000 UTC m=+0.211125691 container remove f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:10 compute-0 systemd[1]: libpod-conmon-f5c5d44651a4d7155fcb3f8a28abbd20e90f976b9ca67e43c07b8f375ecd1b7a.scope: Deactivated successfully.
Nov 22 05:27:10 compute-0 sudo[102991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxlldqecgxkbphrejmknzlnurfwjufz ; /usr/bin/python3'
Nov 22 05:27:10 compute-0 sudo[102991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 22 05:27:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 22 05:27:10 compute-0 python3[102993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:10 compute-0 podman[103005]: 2025-11-22 05:27:10.262771303 +0000 UTC m=+0.048567327 container create edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:27:10 compute-0 podman[102999]: 2025-11-22 05:27:10.271144811 +0000 UTC m=+0.067238677 container create 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 22 05:27:10 compute-0 systemd[1]: Started libpod-conmon-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope.
Nov 22 05:27:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 22 05:27:10 compute-0 systemd[1]: Started libpod-conmon-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope.
Nov 22 05:27:10 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 22 05:27:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 22 05:27:10 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 05:27:10 compute-0 podman[103005]: 2025-11-22 05:27:10.240202124 +0000 UTC m=+0.025998148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:10 compute-0 podman[102999]: 2025-11-22 05:27:10.245957564 +0000 UTC m=+0.042051400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:10 compute-0 ceph-mon[75840]: 3.10 deep-scrub starts
Nov 22 05:27:10 compute-0 ceph-mon[75840]: 3.10 deep-scrub ok
Nov 22 05:27:10 compute-0 ceph-mon[75840]: from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 05:27:10 compute-0 ceph-mon[75840]: 5.e scrub starts
Nov 22 05:27:10 compute-0 ceph-mon[75840]: 5.e scrub ok
Nov 22 05:27:10 compute-0 ceph-mon[75840]: osdmap e51: 3 total, 3 up, 3 in
Nov 22 05:27:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 05:27:10 compute-0 podman[103005]: 2025-11-22 05:27:10.391796941 +0000 UTC m=+0.177593015 container init edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:10 compute-0 podman[102999]: 2025-11-22 05:27:10.398322779 +0000 UTC m=+0.194416695 container init 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:10 compute-0 podman[103005]: 2025-11-22 05:27:10.407163308 +0000 UTC m=+0.192959342 container start edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:27:10 compute-0 podman[102999]: 2025-11-22 05:27:10.412951498 +0000 UTC m=+0.209045374 container start 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:10 compute-0 podman[103005]: 2025-11-22 05:27:10.41342985 +0000 UTC m=+0.199225874 container attach edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:10 compute-0 podman[102999]: 2025-11-22 05:27:10.417783418 +0000 UTC m=+0.213877334 container attach 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 05:27:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225977205' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:27:10 compute-0 laughing_moser[103032]: 
Nov 22 05:27:10 compute-0 laughing_moser[103032]: {"fsid":"13fdadc6-d566-5465-9ac8-a148ef130da1","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":193,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":51,"num_osds":3,"num_up_osds":3,"osd_up_since":1763789160,"num_in_osds":3,"osd_in_since":1763789129,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":2}],"num_pgs":195,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":84307968,"bytes_avail":64327618560,"bytes_total":64411926528,"unknown_pgs_ratio":0.010256410576403141},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.dntioh","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-11-22T05:27:01.678326+0000","services":{}},"progress_events":{}}
Nov 22 05:27:11 compute-0 systemd[1]: libpod-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope: Deactivated successfully.
Nov 22 05:27:11 compute-0 conmon[103032]: conmon edff554f5fff9a1f4820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope/container/memory.events
Nov 22 05:27:11 compute-0 podman[103061]: 2025-11-22 05:27:11.076320554 +0000 UTC m=+0.041992688 container died edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e7b79eaab2d82fd699b4bc42fb698cb3ad402dfbbbe8edfd9455a91a8563a7f-merged.mount: Deactivated successfully.
Nov 22 05:27:11 compute-0 podman[103061]: 2025-11-22 05:27:11.130788442 +0000 UTC m=+0.096460556 container remove edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a (image=quay.io/ceph/ceph:v18, name=laughing_moser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:11 compute-0 systemd[1]: libpod-conmon-edff554f5fff9a1f482057573b895cf534b297f4af82d9304490a9eea7b1458a.scope: Deactivated successfully.
Nov 22 05:27:11 compute-0 sudo[102991]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:11 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 51 pg[10.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 22 05:27:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 05:27:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 22 05:27:11 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 22 05:27:11 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 52 pg[10.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [2] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:11 compute-0 ceph-mon[75840]: pgmap v114: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:11 compute-0 ceph-mon[75840]: 4.15 scrub starts
Nov 22 05:27:11 compute-0 ceph-mon[75840]: 4.15 scrub ok
Nov 22 05:27:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/225977205' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 05:27:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3277347465' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 05:27:11 compute-0 ceph-mon[75840]: osdmap e52: 3 total, 3 up, 3 in
Nov 22 05:27:11 compute-0 priceless_ardinghelli[103031]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:27:11 compute-0 priceless_ardinghelli[103031]: --> relative data size: 1.0
Nov 22 05:27:11 compute-0 priceless_ardinghelli[103031]: --> All data devices are unavailable
Nov 22 05:27:11 compute-0 systemd[1]: libpod-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Deactivated successfully.
Nov 22 05:27:11 compute-0 podman[102999]: 2025-11-22 05:27:11.540705764 +0000 UTC m=+1.336799610 container died 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:27:11 compute-0 systemd[1]: libpod-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Consumed 1.056s CPU time.
Nov 22 05:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-63c37b715ad427900cdc4d5363e484c86ae7fd056f9db59834419c5792a2456a-merged.mount: Deactivated successfully.
Nov 22 05:27:11 compute-0 podman[102999]: 2025-11-22 05:27:11.609998286 +0000 UTC m=+1.406092112 container remove 02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:27:11 compute-0 systemd[1]: libpod-conmon-02033d64e187a155362901323e6f25a2453d62899238b33e80545356d2e8e82e.scope: Deactivated successfully.
Nov 22 05:27:11 compute-0 sudo[102867]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v117: 196 pgs: 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 05:27:11 compute-0 sudo[103126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:11 compute-0 sudo[103126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:11 compute-0 sudo[103126]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:11 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 22 05:27:11 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 22 05:27:11 compute-0 sudo[103151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:11 compute-0 sudo[103151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:11 compute-0 sudo[103151]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:11 compute-0 sudo[103176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:11 compute-0 sudo[103176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:11 compute-0 sudo[103176]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:11 compute-0 sudo[103201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:27:11 compute-0 sudo[103201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:12 compute-0 sudo[103249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpiidcofijcppumzeslqfzqrhdwwvdpp ; /usr/bin/python3'
Nov 22 05:27:12 compute-0 sudo[103249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:12 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 22 05:27:12 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 22 05:27:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 22 05:27:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 22 05:27:12 compute-0 python3[103253]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 22 05:27:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.326739035 +0000 UTC m=+0.057708262 container create 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:12 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 22 05:27:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 22 05:27:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.357579281 +0000 UTC m=+0.073210812 container create e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 22 05:27:12 compute-0 systemd[1]: Started libpod-conmon-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope.
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.298067379 +0000 UTC m=+0.029036686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:12 compute-0 systemd[1]: Started libpod-conmon-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope.
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.328463185 +0000 UTC m=+0.044094746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.425815299 +0000 UTC m=+0.156784536 container init 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.4334083 +0000 UTC m=+0.164377537 container start 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.439391456 +0000 UTC m=+0.170360683 container attach 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.450119437 +0000 UTC m=+0.165750998 container init e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.460327348 +0000 UTC m=+0.175958909 container start e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:27:12 compute-0 fervent_cohen[103327]: 167 167
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.467650283 +0000 UTC m=+0.183281834 container attach e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:12 compute-0 systemd[1]: libpod-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope: Deactivated successfully.
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.468580024 +0000 UTC m=+0.184211585 container died e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c8703731dcde36df601d2cb8cf9f9a4ce574310fccf79d96a9191ad56897e9-merged.mount: Deactivated successfully.
Nov 22 05:27:12 compute-0 podman[103293]: 2025-11-22 05:27:12.526907108 +0000 UTC m=+0.242538659 container remove e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cohen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:27:12 compute-0 systemd[1]: libpod-conmon-e72a1f206fc679b6e081010ed5344c1bb9dc2506dbb539f3dd7a490bb73537da.scope: Deactivated successfully.
Nov 22 05:27:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 22 05:27:12 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 22 05:27:12 compute-0 podman[103352]: 2025-11-22 05:27:12.739854739 +0000 UTC m=+0.059734097 container create 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:27:12 compute-0 systemd[1]: Started libpod-conmon-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope.
Nov 22 05:27:12 compute-0 podman[103352]: 2025-11-22 05:27:12.714896947 +0000 UTC m=+0.034776385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:12 compute-0 podman[103352]: 2025-11-22 05:27:12.849671795 +0000 UTC m=+0.169551233 container init 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:27:12 compute-0 podman[103352]: 2025-11-22 05:27:12.863010106 +0000 UTC m=+0.182889464 container start 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:12 compute-0 podman[103352]: 2025-11-22 05:27:12.867819125 +0000 UTC m=+0.187698513 container attach 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:27:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 05:27:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021684245' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:27:12 compute-0 magical_wright[103322]: 
Nov 22 05:27:12 compute-0 magical_wright[103322]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pzxxqv","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 22 05:27:12 compute-0 systemd[1]: libpod-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope: Deactivated successfully.
Nov 22 05:27:12 compute-0 podman[103292]: 2025-11-22 05:27:12.987366639 +0000 UTC m=+0.718335846 container died 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c94510fb014bf1bb999d088c8caa321f4c3694575e51f5d94fe770eeade7209-merged.mount: Deactivated successfully.
Nov 22 05:27:13 compute-0 podman[103292]: 2025-11-22 05:27:13.037778116 +0000 UTC m=+0.768747363 container remove 22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:27:13 compute-0 systemd[1]: libpod-conmon-22009f5f5cfbe25ecc45ab889317ce0e36d4fc7d9138cd2b421eccbe1be9bdb4.scope: Deactivated successfully.
Nov 22 05:27:13 compute-0 sudo[103249]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:13 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 22 05:27:13 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 22 05:27:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 22 05:27:13 compute-0 ceph-mon[75840]: pgmap v117: 196 pgs: 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 3.13 scrub starts
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 3.13 scrub ok
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 4.16 scrub starts
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 4.16 scrub ok
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 5.10 scrub starts
Nov 22 05:27:13 compute-0 ceph-mon[75840]: 5.10 scrub ok
Nov 22 05:27:13 compute-0 ceph-mon[75840]: osdmap e53: 3 total, 3 up, 3 in
Nov 22 05:27:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 05:27:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1021684245' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 05:27:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 05:27:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 22 05:27:13 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 22 05:27:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 22 05:27:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 05:27:13 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [1] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]: {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     "0": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "devices": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "/dev/loop3"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             ],
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_name": "ceph_lv0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_size": "21470642176",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "name": "ceph_lv0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "tags": {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.crush_device_class": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.encrypted": "0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_id": "0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.vdo": "0"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             },
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "vg_name": "ceph_vg0"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         }
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     ],
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     "1": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "devices": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "/dev/loop4"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             ],
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_name": "ceph_lv1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_size": "21470642176",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "name": "ceph_lv1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "tags": {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.crush_device_class": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.encrypted": "0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_id": "1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.vdo": "0"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             },
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "vg_name": "ceph_vg1"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         }
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     ],
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     "2": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "devices": [
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "/dev/loop5"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             ],
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_name": "ceph_lv2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_size": "21470642176",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "name": "ceph_lv2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "tags": {
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.crush_device_class": "",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.encrypted": "0",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osd_id": "2",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:                 "ceph.vdo": "0"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             },
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "type": "block",
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:             "vg_name": "ceph_vg2"
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:         }
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]:     ]
Nov 22 05:27:13 compute-0 hopeful_kirch[103387]: }
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v120: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 05:27:13 compute-0 systemd[1]: libpod-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope: Deactivated successfully.
Nov 22 05:27:13 compute-0 podman[103352]: 2025-11-22 05:27:13.70550879 +0000 UTC m=+1.025388208 container died 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7497ade1a3a8cf6397a614118cf7899f14434230ca9250c4a3f9abbc8e29e38c-merged.mount: Deactivated successfully.
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:13 compute-0 podman[103352]: 2025-11-22 05:27:13.779285694 +0000 UTC m=+1.099165082 container remove 111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:27:13 compute-0 systemd[1]: libpod-conmon-111cc9f965f1dacc47d6cbebbf761c6d3bc77b862185f73267312e52ce25df5d.scope: Deactivated successfully.
Nov 22 05:27:13 compute-0 sudo[103201]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:13 compute-0 sudo[103424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:13 compute-0 sudo[103424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:13 compute-0 sudo[103424]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:13 compute-0 sudo[103471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-addpezhchmlhbskekymxkmfelrrtbhde ; /usr/bin/python3'
Nov 22 05:27:13 compute-0 sudo[103471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:13 compute-0 sudo[103473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:13 compute-0 sudo[103473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:14 compute-0 sudo[103473]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:14 compute-0 sudo[103500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:14 compute-0 sudo[103500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:14 compute-0 sudo[103500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:14 compute-0 python3[103480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:14 compute-0 sudo[103525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:27:14 compute-0 sudo[103525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.192908809 +0000 UTC m=+0.063691177 container create b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:27:14 compute-0 systemd[1]: Started libpod-conmon-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope.
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.165765577 +0000 UTC m=+0.036548035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.307137314 +0000 UTC m=+0.177919732 container init b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.319012332 +0000 UTC m=+0.189794740 container start b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.322847068 +0000 UTC m=+0.193629476 container attach b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 22 05:27:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 05:27:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 22 05:27:14 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 22 05:27:14 compute-0 ceph-mon[75840]: 3.14 scrub starts
Nov 22 05:27:14 compute-0 ceph-mon[75840]: 3.14 scrub ok
Nov 22 05:27:14 compute-0 ceph-mon[75840]: 2.c scrub starts
Nov 22 05:27:14 compute-0 ceph-mon[75840]: 2.c scrub ok
Nov 22 05:27:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 05:27:14 compute-0 ceph-mon[75840]: osdmap e54: 3 total, 3 up, 3 in
Nov 22 05:27:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 05:27:14 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-rgw-rgw-compute-0-pzxxqv[101834]: 2025-11-22T05:27:14.532+0000 7f102e508940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 22 05:27:14 compute-0 radosgw[101838]: LDAP not started since no server URIs were provided in the configuration.
Nov 22 05:27:14 compute-0 radosgw[101838]: framework: beast
Nov 22 05:27:14 compute-0 radosgw[101838]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 22 05:27:14 compute-0 radosgw[101838]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 22 05:27:14 compute-0 radosgw[101838]: starting handler: beast
Nov 22 05:27:14 compute-0 radosgw[101838]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 05:27:14 compute-0 radosgw[101838]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pzxxqv,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3481acdc-caf7-460d-ae73-20f679a0fd37,zone_name=default,zonegroup_id=4901858a-2ef2-49a3-9870-8af5774bd334,zonegroup_name=default}
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.627892806 +0000 UTC m=+0.056017004 container create 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:27:14 compute-0 systemd[1]: Started libpod-conmon-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope.
Nov 22 05:27:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.603585297 +0000 UTC m=+0.031709535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.710761014 +0000 UTC m=+0.138885272 container init 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.716444322 +0000 UTC m=+0.144568520 container start 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:14 compute-0 flamboyant_ptolemy[104188]: 167 167
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.72166905 +0000 UTC m=+0.149793298 container attach 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:14 compute-0 systemd[1]: libpod-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope: Deactivated successfully.
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.723021631 +0000 UTC m=+0.151145839 container died 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d5a5b381d3e0bde88a13d927a0cfa8c47ae1a6a748b00459b9e16f5efd1a22-merged.mount: Deactivated successfully.
Nov 22 05:27:14 compute-0 podman[103636]: 2025-11-22 05:27:14.770797488 +0000 UTC m=+0.198921686 container remove 23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:27:14 compute-0 systemd[1]: libpod-conmon-23b5791dc0c8515c4878620b66325ffdf1a2f413aa309c7330c1a72b20bd5d21.scope: Deactivated successfully.
Nov 22 05:27:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 22 05:27:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862054732' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 05:27:14 compute-0 modest_mcclintock[103565]: mimic
Nov 22 05:27:14 compute-0 sshd-session[103571]: Invalid user solana from 80.94.92.166 port 43480
Nov 22 05:27:14 compute-0 systemd[1]: libpod-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope: Deactivated successfully.
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.899783696 +0000 UTC m=+0.770566094 container died b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:27:14 compute-0 podman[104210]: 2025-11-22 05:27:14.914641301 +0000 UTC m=+0.046816847 container create fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b212bcc01f5c82974b07cbcd17657a49c48f4638430f6e5b8eae46f10d835855-merged.mount: Deactivated successfully.
Nov 22 05:27:14 compute-0 systemd[1]: Started libpod-conmon-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope.
Nov 22 05:27:14 compute-0 podman[103528]: 2025-11-22 05:27:14.959163154 +0000 UTC m=+0.829945522 container remove b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc (image=quay.io/ceph/ceph:v18, name=modest_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:14 compute-0 systemd[1]: libpod-conmon-b6ed06c32a4c1ef033ad526a81b4b9f736027a1e852258e34c703a4dfaf984bc.scope: Deactivated successfully.
Nov 22 05:27:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 sudo[103471]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:14 compute-0 podman[104210]: 2025-11-22 05:27:14.886375623 +0000 UTC m=+0.018551189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:14 compute-0 podman[104210]: 2025-11-22 05:27:14.993146571 +0000 UTC m=+0.125322137 container init fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 22 05:27:15 compute-0 podman[104210]: 2025-11-22 05:27:15.000639259 +0000 UTC m=+0.132814815 container start fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:27:15 compute-0 podman[104210]: 2025-11-22 05:27:15.005868728 +0000 UTC m=+0.138044274 container attach fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:27:15 compute-0 sshd-session[103571]: Connection closed by invalid user solana 80.94.92.166 port 43480 [preauth]
Nov 22 05:27:15 compute-0 ceph-mon[75840]: pgmap v120: 197 pgs: 1 unknown, 196 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 05:27:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3219413151' entity='client.rgw.rgw.compute-0.pzxxqv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 05:27:15 compute-0 ceph-mon[75840]: osdmap e55: 3 total, 3 up, 3 in
Nov 22 05:27:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2862054732' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 05:27:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 11 KiB/s wr, 300 op/s
Nov 22 05:27:15 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 22 05:27:15 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 22 05:27:15 compute-0 sudo[104283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqbtpicszpxkskqvfhohrumbxtotfcl ; /usr/bin/python3'
Nov 22 05:27:15 compute-0 sudo[104283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:16 compute-0 python3[104286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]: {
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_id": 1,
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "type": "bluestore"
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     },
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_id": 2,
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "type": "bluestore"
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     },
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_id": 0,
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:         "type": "bluestore"
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]:     }
Nov 22 05:27:16 compute-0 strange_visvesvaraya[104242]: }
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.064495655 +0000 UTC m=+0.043375559 container create 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:27:16 compute-0 systemd[1]: libpod-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Deactivated successfully.
Nov 22 05:27:16 compute-0 systemd[1]: libpod-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Consumed 1.090s CPU time.
Nov 22 05:27:16 compute-0 systemd[1]: Started libpod-conmon-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope.
Nov 22 05:27:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:16 compute-0 podman[104315]: 2025-11-22 05:27:16.136899767 +0000 UTC m=+0.032707819 container died fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.048585555 +0000 UTC m=+0.027465479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.153086122 +0000 UTC m=+0.131966126 container init 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-270d0f40a50bd49fd75ca8ac5a2b0b84f7149f4c72cfa15b137be46730fb983e-merged.mount: Deactivated successfully.
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.16100049 +0000 UTC m=+0.139880404 container start 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.172946709 +0000 UTC m=+0.151826633 container attach 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:27:16 compute-0 podman[104315]: 2025-11-22 05:27:16.192854379 +0000 UTC m=+0.088662431 container remove fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:16 compute-0 systemd[1]: libpod-conmon-fc3800197c59fabf5e56508095ddfda1e438359d523c8824e5e1a89b61929036.scope: Deactivated successfully.
Nov 22 05:27:16 compute-0 sudo[103525]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:16 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 689fcb5c-c553-4ae5-84ae-4e0a24698b4e does not exist
Nov 22 05:27:16 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 79b24e5b-ec5f-4dff-b617-ec7c5eac3d55 does not exist
Nov 22 05:27:16 compute-0 sudo[104334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:16 compute-0 sudo[104334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 sudo[104334]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 sudo[104359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:27:16 compute-0 sudo[104359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 sudo[104359]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 sudo[104384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:16 compute-0 sudo[104384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 sudo[104384]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 sudo[104409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:16 compute-0 sudo[104409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 sudo[104409]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 sudo[104453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:16 compute-0 sudo[104453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 sudo[104453]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:16 compute-0 sudo[104478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:27:16 compute-0 sudo[104478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 22 05:27:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/828573560' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 05:27:16 compute-0 frosty_allen[104321]: 
Nov 22 05:27:16 compute-0 frosty_allen[104321]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 22 05:27:16 compute-0 systemd[1]: libpod-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope: Deactivated successfully.
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.761859327 +0000 UTC m=+0.740739291 container died 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7731d954482048cda7aa8ea334f14f96ba29d68ed09183614a2b1c9160c890e-merged.mount: Deactivated successfully.
Nov 22 05:27:16 compute-0 podman[104298]: 2025-11-22 05:27:16.820620622 +0000 UTC m=+0.799500526 container remove 59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d (image=quay.io/ceph/ceph:v18, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:27:16 compute-0 systemd[1]: libpod-conmon-59f1eaec75c97bf6d1fc1236ac66d86d2208d50762a02778ef682c47e5ea7f6d.scope: Deactivated successfully.
Nov 22 05:27:16 compute-0 sudo[104283]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:17 compute-0 podman[104587]: 2025-11-22 05:27:17.154747745 +0000 UTC m=+0.088883865 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Nov 22 05:27:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Nov 22 05:27:17 compute-0 ceph-mon[75840]: pgmap v122: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 11 KiB/s wr, 300 op/s
Nov 22 05:27:17 compute-0 ceph-mon[75840]: 3.19 scrub starts
Nov 22 05:27:17 compute-0 ceph-mon[75840]: 3.19 scrub ok
Nov 22 05:27:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/828573560' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 05:27:17 compute-0 podman[104587]: 2025-11-22 05:27:17.279870516 +0000 UTC m=+0.214006546 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 5.7 KiB/s wr, 215 op/s
Nov 22 05:27:17 compute-0 sudo[104478]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:17 compute-0 sudo[104748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:17 compute-0 sudo[104748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:17 compute-0 sudo[104748]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 sudo[104773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:18 compute-0 sudo[104773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 sudo[104773]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 sudo[104798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:18 compute-0 sudo[104798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 sudo[104798]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 22 05:27:18 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 22 05:27:18 compute-0 sudo[104823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:27:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 22 05:27:18 compute-0 sudo[104823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 22 05:27:18 compute-0 ceph-mon[75840]: 5.17 deep-scrub starts
Nov 22 05:27:18 compute-0 ceph-mon[75840]: 5.17 deep-scrub ok
Nov 22 05:27:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:18 compute-0 sudo[104823]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:18 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c0cb22cb-47b9-412c-ac8f-5d74bf79bc78 does not exist
Nov 22 05:27:18 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8f79917e-6ef9-4011-be8a-466e9127faeb does not exist
Nov 22 05:27:18 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d11cfbe0-d78c-4ba3-9d0d-39714c350d9a does not exist
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:27:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:27:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:18 compute-0 sudo[104880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:18 compute-0 sudo[104880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 sudo[104880]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 sudo[104905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:18 compute-0 sudo[104905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 sudo[104905]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 22 05:27:18 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 22 05:27:18 compute-0 sudo[104930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:18 compute-0 sudo[104930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:18 compute-0 sudo[104930]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:18 compute-0 sudo[104955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:27:18 compute-0 sudo[104955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:19 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.135985363 +0000 UTC m=+0.035735468 container create fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:27:19 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 22 05:27:19 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 22 05:27:19 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 22 05:27:19 compute-0 systemd[1]: Started libpod-conmon-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope.
Nov 22 05:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.20817279 +0000 UTC m=+0.107922895 container init fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.213690054 +0000 UTC m=+0.113440159 container start fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.119737896 +0000 UTC m=+0.019488051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:19 compute-0 intelligent_shamir[105034]: 167 167
Nov 22 05:27:19 compute-0 systemd[1]: libpod-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope: Deactivated successfully.
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.221344597 +0000 UTC m=+0.121094782 container attach fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.221974331 +0000 UTC m=+0.121724466 container died fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-feccd5690fa1d76c01dde8e3328b5ea2772a5a108d57ddc06298daefe9cb74d8-merged.mount: Deactivated successfully.
Nov 22 05:27:19 compute-0 ceph-mon[75840]: pgmap v123: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 5.7 KiB/s wr, 215 op/s
Nov 22 05:27:19 compute-0 ceph-mon[75840]: 4.17 scrub starts
Nov 22 05:27:19 compute-0 ceph-mon[75840]: 4.17 scrub ok
Nov 22 05:27:19 compute-0 ceph-mon[75840]: 2.e scrub starts
Nov 22 05:27:19 compute-0 ceph-mon[75840]: 2.e scrub ok
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:27:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:27:19 compute-0 podman[105019]: 2025-11-22 05:27:19.272086751 +0000 UTC m=+0.171836856 container remove fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shamir, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:27:19 compute-0 systemd[1]: libpod-conmon-fe0d131f38cdbc7b8bf546adc2f2ca526a332a8de5f31a4765593aaf3b513d19.scope: Deactivated successfully.
Nov 22 05:27:19 compute-0 podman[105061]: 2025-11-22 05:27:19.411608826 +0000 UTC m=+0.042259664 container create ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:27:19 compute-0 systemd[1]: Started libpod-conmon-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope.
Nov 22 05:27:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:19 compute-0 podman[105061]: 2025-11-22 05:27:19.485355499 +0000 UTC m=+0.116006347 container init ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:27:19 compute-0 podman[105061]: 2025-11-22 05:27:19.390451199 +0000 UTC m=+0.021102057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:19 compute-0 podman[105061]: 2025-11-22 05:27:19.490496975 +0000 UTC m=+0.121147823 container start ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:27:19 compute-0 podman[105061]: 2025-11-22 05:27:19.494120426 +0000 UTC m=+0.124771274 container attach ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:27:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 175 op/s
Nov 22 05:27:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 22 05:27:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 3.1a deep-scrub starts
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 3.1a deep-scrub ok
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 4.19 scrub starts
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 4.19 scrub ok
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 5.1b scrub starts
Nov 22 05:27:20 compute-0 ceph-mon[75840]: 5.1b scrub ok
Nov 22 05:27:20 compute-0 affectionate_joliot[105077]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:27:20 compute-0 affectionate_joliot[105077]: --> relative data size: 1.0
Nov 22 05:27:20 compute-0 affectionate_joliot[105077]: --> All data devices are unavailable
Nov 22 05:27:20 compute-0 systemd[1]: libpod-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope: Deactivated successfully.
Nov 22 05:27:20 compute-0 podman[105061]: 2025-11-22 05:27:20.481046857 +0000 UTC m=+1.111697705 container died ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a376deafa9867d83a6cbe78591de3fa57d7298cb1414e8d679534801fd7f73d-merged.mount: Deactivated successfully.
Nov 22 05:27:20 compute-0 podman[105061]: 2025-11-22 05:27:20.553717205 +0000 UTC m=+1.184368053 container remove ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:27:20 compute-0 systemd[1]: libpod-conmon-ccf9c83efeed1c02cdd0798e5c15dfb87246af7bee68fdaadbdcbe664d261286.scope: Deactivated successfully.
Nov 22 05:27:20 compute-0 sudo[104955]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:20 compute-0 sudo[105120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:20 compute-0 sudo[105120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:20 compute-0 sudo[105120]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:20 compute-0 sudo[105145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:20 compute-0 sudo[105145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:20 compute-0 sudo[105145]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:20 compute-0 sudo[105170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:20 compute-0 sudo[105170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:20 compute-0 sudo[105170]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:20 compute-0 sudo[105195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:27:20 compute-0 sudo[105195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.171036703 +0000 UTC m=+0.048695919 container create 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:27:21 compute-0 systemd[1]: Started libpod-conmon-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope.
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.151704997 +0000 UTC m=+0.029364213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.268145192 +0000 UTC m=+0.145804398 container init 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.276134632 +0000 UTC m=+0.153793858 container start 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:21 compute-0 happy_leavitt[105277]: 167 167
Nov 22 05:27:21 compute-0 systemd[1]: libpod-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope: Deactivated successfully.
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.283488628 +0000 UTC m=+0.161147854 container attach 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.284417119 +0000 UTC m=+0.162076305 container died 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:27:21 compute-0 ceph-mon[75840]: pgmap v124: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 175 op/s
Nov 22 05:27:21 compute-0 ceph-mon[75840]: 3.1c scrub starts
Nov 22 05:27:21 compute-0 ceph-mon[75840]: 3.1c scrub ok
Nov 22 05:27:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c9e6d19a5c97651394a5561e90020bc9295f79a31a0aac6076ac8f5c9bcab51-merged.mount: Deactivated successfully.
Nov 22 05:27:21 compute-0 podman[105261]: 2025-11-22 05:27:21.328227497 +0000 UTC m=+0.205886663 container remove 715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:27:21 compute-0 systemd[1]: libpod-conmon-715b0813879fd54c1723fcdf6ee84311c2d857d44e70d6c54094b91c27bb67ae.scope: Deactivated successfully.
Nov 22 05:27:21 compute-0 podman[105301]: 2025-11-22 05:27:21.543687214 +0000 UTC m=+0.069327973 container create cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:21 compute-0 systemd[1]: Started libpod-conmon-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope.
Nov 22 05:27:21 compute-0 podman[105301]: 2025-11-22 05:27:21.51642272 +0000 UTC m=+0.042063529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:21 compute-0 podman[105301]: 2025-11-22 05:27:21.633343126 +0000 UTC m=+0.158983935 container init cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:27:21 compute-0 podman[105301]: 2025-11-22 05:27:21.641856908 +0000 UTC m=+0.167497617 container start cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:21 compute-0 podman[105301]: 2025-11-22 05:27:21.645340707 +0000 UTC m=+0.170981516 container attach cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:27:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 190 op/s
Nov 22 05:27:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:22 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1d deep-scrub starts
Nov 22 05:27:22 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1d deep-scrub ok
Nov 22 05:27:22 compute-0 reverent_solomon[105318]: {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     "0": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "devices": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "/dev/loop3"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             ],
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_name": "ceph_lv0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_size": "21470642176",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "name": "ceph_lv0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "tags": {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.crush_device_class": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.encrypted": "0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_id": "0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.vdo": "0"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             },
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "vg_name": "ceph_vg0"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         }
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     ],
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     "1": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "devices": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "/dev/loop4"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             ],
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_name": "ceph_lv1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_size": "21470642176",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "name": "ceph_lv1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "tags": {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.crush_device_class": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.encrypted": "0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_id": "1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.vdo": "0"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             },
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "vg_name": "ceph_vg1"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         }
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     ],
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     "2": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "devices": [
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "/dev/loop5"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             ],
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_name": "ceph_lv2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_size": "21470642176",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "name": "ceph_lv2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "tags": {
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.cluster_name": "ceph",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.crush_device_class": "",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.encrypted": "0",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osd_id": "2",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:                 "ceph.vdo": "0"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             },
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "type": "block",
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:             "vg_name": "ceph_vg2"
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:         }
Nov 22 05:27:22 compute-0 reverent_solomon[105318]:     ]
Nov 22 05:27:22 compute-0 reverent_solomon[105318]: }
Nov 22 05:27:22 compute-0 systemd[1]: libpod-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope: Deactivated successfully.
Nov 22 05:27:22 compute-0 podman[105301]: 2025-11-22 05:27:22.452046194 +0000 UTC m=+0.977686933 container died cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:27:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-0043c62c78cf9c5cb8e4c617a604d1b26a7bdbf80d027cc7b8b32f04f20d39bf-merged.mount: Deactivated successfully.
Nov 22 05:27:22 compute-0 podman[105301]: 2025-11-22 05:27:22.529219794 +0000 UTC m=+1.054860523 container remove cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:27:22 compute-0 systemd[1]: libpod-conmon-cec7f36d707d2815b0cd3dbf189be6f790d58a7f4e1d57e8cbe136ad6c7c8eb0.scope: Deactivated successfully.
Nov 22 05:27:22 compute-0 sudo[105195]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:22 compute-0 sudo[105340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:22 compute-0 sudo[105340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:22 compute-0 sudo[105340]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:22 compute-0 sudo[105365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:27:22 compute-0 sudo[105365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:22 compute-0 sudo[105365]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:22 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 22 05:27:22 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 22 05:27:22 compute-0 sudo[105390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:22 compute-0 sudo[105390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:22 compute-0 sudo[105390]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:22 compute-0 sudo[105415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:27:22 compute-0 sudo[105415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:23 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 22 05:27:23 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 22 05:27:23 compute-0 ceph-mon[75840]: pgmap v125: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.1 KiB/s wr, 190 op/s
Nov 22 05:27:23 compute-0 ceph-mon[75840]: 4.1d deep-scrub starts
Nov 22 05:27:23 compute-0 ceph-mon[75840]: 4.1d deep-scrub ok
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.312863851 +0000 UTC m=+0.044996045 container create 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:23 compute-0 systemd[1]: Started libpod-conmon-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope.
Nov 22 05:27:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.29329208 +0000 UTC m=+0.025424314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.397275954 +0000 UTC m=+0.129408238 container init 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.409526371 +0000 UTC m=+0.141658565 container start 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:27:23 compute-0 kind_sutherland[105494]: 167 167
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.413234264 +0000 UTC m=+0.145366558 container attach 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:27:23 compute-0 systemd[1]: libpod-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope: Deactivated successfully.
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.413855808 +0000 UTC m=+0.145988012 container died 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:27:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e691f398e26146b65ed9a7a134ae70bd6078277235fd48ff9f27861e14e6923c-merged.mount: Deactivated successfully.
Nov 22 05:27:23 compute-0 podman[105478]: 2025-11-22 05:27:23.458700929 +0000 UTC m=+0.190833153 container remove 2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:27:23 compute-0 systemd[1]: libpod-conmon-2c725ea93c6b3195801464da9097a858465e8451992cfd2fbb3a26b1f9817d48.scope: Deactivated successfully.
Nov 22 05:27:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 158 op/s
Nov 22 05:27:23 compute-0 podman[105517]: 2025-11-22 05:27:23.696397398 +0000 UTC m=+0.059246057 container create 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:27:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 22 05:27:23 compute-0 systemd[1]: Started libpod-conmon-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope.
Nov 22 05:27:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 22 05:27:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:23 compute-0 podman[105517]: 2025-11-22 05:27:23.674070615 +0000 UTC m=+0.036919304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:23 compute-0 podman[105517]: 2025-11-22 05:27:23.78119095 +0000 UTC m=+0.144039619 container init 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:23 compute-0 podman[105517]: 2025-11-22 05:27:23.794456069 +0000 UTC m=+0.157304718 container start 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:27:23 compute-0 podman[105517]: 2025-11-22 05:27:23.798544261 +0000 UTC m=+0.161393000 container attach 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:27:24 compute-0 ceph-mon[75840]: 7.7 scrub starts
Nov 22 05:27:24 compute-0 ceph-mon[75840]: 7.7 scrub ok
Nov 22 05:27:24 compute-0 ceph-mon[75840]: 4.1e scrub starts
Nov 22 05:27:24 compute-0 ceph-mon[75840]: 4.1e scrub ok
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]: {
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_id": 1,
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "type": "bluestore"
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     },
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_id": 2,
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "type": "bluestore"
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     },
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_id": 0,
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:         "type": "bluestore"
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]:     }
Nov 22 05:27:24 compute-0 sleepy_satoshi[105534]: }
Nov 22 05:27:24 compute-0 systemd[1]: libpod-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Deactivated successfully.
Nov 22 05:27:24 compute-0 podman[105517]: 2025-11-22 05:27:24.894007369 +0000 UTC m=+1.256856038 container died 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:27:24 compute-0 systemd[1]: libpod-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Consumed 1.107s CPU time.
Nov 22 05:27:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-45266e6ad804c9ef2e52b230005afbb3708fb319b7b8193574309879cf1303c5-merged.mount: Deactivated successfully.
Nov 22 05:27:24 compute-0 podman[105517]: 2025-11-22 05:27:24.961593882 +0000 UTC m=+1.324442561 container remove 2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_satoshi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:24 compute-0 systemd[1]: libpod-conmon-2affdbe612eafcd3b275089cd7200162473d079bdc9d02ef2584b120931080e0.scope: Deactivated successfully.
Nov 22 05:27:25 compute-0 sudo[105415]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:27:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:27:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5d1eda3a-68d6-4172-b1bc-7e9176ddc661 does not exist
Nov 22 05:27:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3e2a0a41-7236-4adb-8c8a-429a6fff16ea does not exist
Nov 22 05:27:25 compute-0 sudo[105579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:27:25 compute-0 sudo[105579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:25 compute-0 sudo[105579]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:25 compute-0 sudo[105604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:27:25 compute-0 sudo[105604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:27:25 compute-0 sudo[105604]: pam_unix(sudo:session): session closed for user root
Nov 22 05:27:25 compute-0 ceph-mon[75840]: pgmap v126: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 3.4 KiB/s wr, 158 op/s
Nov 22 05:27:25 compute-0 ceph-mon[75840]: 7.b scrub starts
Nov 22 05:27:25 compute-0 ceph-mon[75840]: 7.b scrub ok
Nov 22 05:27:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3 KiB/s wr, 139 op/s
Nov 22 05:27:26 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 22 05:27:26 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 22 05:27:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:27 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 22 05:27:27 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 22 05:27:27 compute-0 ceph-mon[75840]: pgmap v127: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3 KiB/s wr, 139 op/s
Nov 22 05:27:27 compute-0 ceph-mon[75840]: 4.1f scrub starts
Nov 22 05:27:27 compute-0 ceph-mon[75840]: 4.1f scrub ok
Nov 22 05:27:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 22 05:27:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 22 05:27:28 compute-0 ceph-mon[75840]: 5.1c scrub starts
Nov 22 05:27:28 compute-0 ceph-mon[75840]: 5.1c scrub ok
Nov 22 05:27:28 compute-0 ceph-mon[75840]: 6.3 scrub starts
Nov 22 05:27:28 compute-0 ceph-mon[75840]: 6.3 scrub ok
Nov 22 05:27:29 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 22 05:27:29 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 22 05:27:29 compute-0 ceph-mon[75840]: pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:30 compute-0 ceph-mon[75840]: 2.10 scrub starts
Nov 22 05:27:30 compute-0 ceph-mon[75840]: 2.10 scrub ok
Nov 22 05:27:30 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 22 05:27:30 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 22 05:27:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 22 05:27:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 22 05:27:31 compute-0 ceph-mon[75840]: pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 22 05:27:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 22 05:27:32 compute-0 ceph-mon[75840]: 7.d scrub starts
Nov 22 05:27:32 compute-0 ceph-mon[75840]: 7.d scrub ok
Nov 22 05:27:32 compute-0 ceph-mon[75840]: 6.5 scrub starts
Nov 22 05:27:32 compute-0 ceph-mon[75840]: 6.5 scrub ok
Nov 22 05:27:32 compute-0 ceph-mon[75840]: pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 05:27:32 compute-0 ceph-mon[75840]: 6.7 scrub starts
Nov 22 05:27:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 22 05:27:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 22 05:27:33 compute-0 ceph-mon[75840]: 6.7 scrub ok
Nov 22 05:27:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:33 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 22 05:27:33 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 22 05:27:34 compute-0 ceph-mon[75840]: 7.10 scrub starts
Nov 22 05:27:34 compute-0 ceph-mon[75840]: 7.10 scrub ok
Nov 22 05:27:34 compute-0 ceph-mon[75840]: pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:34 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 22 05:27:35 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 22 05:27:35 compute-0 ceph-mon[75840]: 7.12 scrub starts
Nov 22 05:27:35 compute-0 ceph-mon[75840]: 7.12 scrub ok
Nov 22 05:27:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:35 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Nov 22 05:27:36 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Nov 22 05:27:36 compute-0 ceph-mon[75840]: 2.12 scrub starts
Nov 22 05:27:36 compute-0 ceph-mon[75840]: 2.12 scrub ok
Nov 22 05:27:36 compute-0 ceph-mon[75840]: pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:36 compute-0 ceph-mon[75840]: 6.9 deep-scrub starts
Nov 22 05:27:36 compute-0 ceph-mon[75840]: 6.9 deep-scrub ok
Nov 22 05:27:36 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 22 05:27:36 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 22 05:27:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:37 compute-0 ceph-mon[75840]: 6.a scrub starts
Nov 22 05:27:37 compute-0 ceph-mon[75840]: 6.a scrub ok
Nov 22 05:27:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 22 05:27:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 22 05:27:38 compute-0 ceph-mon[75840]: pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:38 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Nov 22 05:27:38 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Nov 22 05:27:39 compute-0 ceph-mon[75840]: 5.1f scrub starts
Nov 22 05:27:39 compute-0 ceph-mon[75840]: 5.1f scrub ok
Nov 22 05:27:39 compute-0 ceph-mon[75840]: 6.10 scrub starts
Nov 22 05:27:39 compute-0 ceph-mon[75840]: 6.10 scrub ok
Nov 22 05:27:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 22 05:27:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 22 05:27:39 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Nov 22 05:27:39 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Nov 22 05:27:40 compute-0 ceph-mon[75840]: pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:40 compute-0 ceph-mon[75840]: 6.12 scrub starts
Nov 22 05:27:40 compute-0 ceph-mon[75840]: 6.12 scrub ok
Nov 22 05:27:40 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Nov 22 05:27:40 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Nov 22 05:27:41 compute-0 ceph-mon[75840]: 7.14 scrub starts
Nov 22 05:27:41 compute-0 ceph-mon[75840]: 7.14 scrub ok
Nov 22 05:27:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 22 05:27:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 22 05:27:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:42 compute-0 ceph-mon[75840]: 7.16 deep-scrub starts
Nov 22 05:27:42 compute-0 ceph-mon[75840]: 7.16 deep-scrub ok
Nov 22 05:27:42 compute-0 ceph-mon[75840]: pgmap v135: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 22 05:27:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 22 05:27:43 compute-0 ceph-mon[75840]: 7.17 scrub starts
Nov 22 05:27:43 compute-0 ceph-mon[75840]: 7.17 scrub ok
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:27:43
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control']
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:27:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:27:44 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Nov 22 05:27:44 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Nov 22 05:27:44 compute-0 ceph-mon[75840]: 2.14 scrub starts
Nov 22 05:27:44 compute-0 ceph-mon[75840]: 2.14 scrub ok
Nov 22 05:27:44 compute-0 ceph-mon[75840]: pgmap v136: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:44 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 22 05:27:44 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 22 05:27:45 compute-0 ceph-mon[75840]: 2.1a deep-scrub starts
Nov 22 05:27:45 compute-0 ceph-mon[75840]: 2.1a deep-scrub ok
Nov 22 05:27:45 compute-0 ceph-mon[75840]: 7.19 scrub starts
Nov 22 05:27:45 compute-0 ceph-mon[75840]: 7.19 scrub ok
Nov 22 05:27:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:45 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Nov 22 05:27:45 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Nov 22 05:27:46 compute-0 ceph-mon[75840]: pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:46 compute-0 ceph-mon[75840]: 6.16 scrub starts
Nov 22 05:27:46 compute-0 ceph-mon[75840]: 6.16 scrub ok
Nov 22 05:27:46 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 22 05:27:46 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 22 05:27:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.18 deep-scrub starts
Nov 22 05:27:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.18 deep-scrub ok
Nov 22 05:27:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:47 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 22 05:27:47 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 22 05:27:47 compute-0 ceph-mon[75840]: 7.1d scrub starts
Nov 22 05:27:47 compute-0 ceph-mon[75840]: 7.1d scrub ok
Nov 22 05:27:47 compute-0 ceph-mon[75840]: 6.18 deep-scrub starts
Nov 22 05:27:47 compute-0 ceph-mon[75840]: 6.18 deep-scrub ok
Nov 22 05:27:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:47 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 22 05:27:47 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 22 05:27:48 compute-0 ceph-mon[75840]: 2.1e deep-scrub starts
Nov 22 05:27:48 compute-0 ceph-mon[75840]: 2.1e deep-scrub ok
Nov 22 05:27:48 compute-0 ceph-mon[75840]: pgmap v138: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:48 compute-0 ceph-mon[75840]: 7.1e scrub starts
Nov 22 05:27:48 compute-0 ceph-mon[75840]: 7.1e scrub ok
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 05:27:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:27:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 22 05:27:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 22 05:27:49 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 05:27:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:27:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Nov 22 05:27:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Nov 22 05:27:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:27:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:49 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 22 05:27:49 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 22 05:27:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 22 05:27:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 22 05:27:50 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 22 05:27:50 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 05:27:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:27:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:50 compute-0 ceph-mon[75840]: osdmap e56: 3 total, 3 up, 3 in
Nov 22 05:27:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:50 compute-0 ceph-mon[75840]: 6.1e scrub starts
Nov 22 05:27:50 compute-0 ceph-mon[75840]: 6.1e scrub ok
Nov 22 05:27:50 compute-0 ceph-mon[75840]: pgmap v140: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:50 compute-0 ceph-mon[75840]: 6.19 scrub starts
Nov 22 05:27:50 compute-0 ceph-mon[75840]: 6.19 scrub ok
Nov 22 05:27:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Nov 22 05:27:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.799673080s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 135.846664429s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 57 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=11.799673080s) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 135.846664429s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 22 05:27:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 22 05:27:51 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 22 05:27:51 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 05:27:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 05:27:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:51 compute-0 ceph-mon[75840]: osdmap e57: 3 total, 3 up, 3 in
Nov 22 05:27:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:51 compute-0 ceph-mon[75840]: 6.1a scrub starts
Nov 22 05:27:51 compute-0 ceph-mon[75840]: 6.1a scrub ok
Nov 22 05:27:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:51 compute-0 ceph-mon[75840]: osdmap e58: 3 total, 3 up, 3 in
Nov 22 05:27:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 58 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [1] r=0 lpr=57 pi=[47,57)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:27:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:27:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:51 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Nov 22 05:27:51 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Nov 22 05:27:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 22 05:27:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 22 05:27:52 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] update: starting ev 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 25e9affc-cc76-49cd-a329-2a7fae9b31ce (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event a76e7016-1218-4220-835a-364659c71a5d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 077c803c-8c32-43a6-ac82-e53137e4cb61 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] complete: finished ev 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 05:27:52 compute-0 ceph-mgr[76134]: [progress INFO root] Completed event 45010bcc-ee36-43a4-a508-7f028622ea8d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 22 05:27:52 compute-0 ceph-mon[75840]: pgmap v143: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:52 compute-0 ceph-mon[75840]: 6.1b scrub starts
Nov 22 05:27:52 compute-0 ceph-mon[75840]: 6.1b scrub ok
Nov 22 05:27:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:52 compute-0 ceph-mon[75840]: osdmap e59: 3 total, 3 up, 3 in
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 59 pg[9.0( v 55'578 (0'0,55'578] local-lis/les=49/50 n=209 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.972432137s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 55'577 active pruub 137.870498657s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=51/52 n=8 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=13.982721329s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 52'15 active pruub 133.779541016s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 59 pg[10.0( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=13.982721329s) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 unknown pruub 133.779541016s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 59 pg[9.0( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.972432137s) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 0'0 unknown pruub 137.870498657s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 22 05:27:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 22 05:27:53 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.15( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.14( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.17( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.16( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.11( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.3( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.2( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 lc 0'0 (0'0,52'16] local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.c( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.d( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.b( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.f( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.9( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.a( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.e( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.8( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.6( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.7( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.4( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1a( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.5( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.18( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.19( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1e( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1f( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1c( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1d( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.12( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.13( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1b( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.10( v 55'578 lc 0'0 (0'0,55'578] local-lis/les=49/50 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.14( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.18( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1d( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.5( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.0( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 52'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.c( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.9( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.14( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.15( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.3( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 60 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [2] r=0 lpr=59 pi=[51,59)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.0( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 55'577 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.2( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.a( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.4( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1a( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.12( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.10( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 60 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [1] r=0 lpr=59 pi=[49,59)/1 crt=55'578 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v146: 290 pgs: 93 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 05:27:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:53 compute-0 ceph-mgr[76134]: [progress INFO root] Writing back 16 completed events
Nov 22 05:27:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 05:27:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 22 05:27:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 22 05:27:54 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 22 05:27:54 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.728989601s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 141.913238525s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:27:54 compute-0 ceph-mon[75840]: osdmap e60: 3 total, 3 up, 3 in
Nov 22 05:27:54 compute-0 ceph-mon[75840]: pgmap v146: 290 pgs: 93 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 05:27:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:27:54 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=14.728989601s) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown pruub 141.913238525s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 22 05:27:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 05:27:55 compute-0 ceph-mon[75840]: osdmap e61: 3 total, 3 up, 3 in
Nov 22 05:27:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 22 05:27:55 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.0( empty local-lis/les=61/62 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [1] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:27:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:56 compute-0 ceph-mon[75840]: osdmap e62: 3 total, 3 up, 3 in
Nov 22 05:27:56 compute-0 ceph-mon[75840]: pgmap v149: 321 pgs: 1 peering, 62 unknown, 258 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:56 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 22 05:27:56 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 22 05:27:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:27:57 compute-0 ceph-mon[75840]: 4.18 scrub starts
Nov 22 05:27:57 compute-0 ceph-mon[75840]: 4.18 scrub ok
Nov 22 05:27:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v150: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 22 05:27:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 22 05:27:58 compute-0 ceph-mon[75840]: pgmap v150: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:58 compute-0 ceph-mon[75840]: 5.1e scrub starts
Nov 22 05:27:58 compute-0 ceph-mon[75840]: 5.1e scrub ok
Nov 22 05:27:59 compute-0 sudo[105652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btgsttfzkytehiyhsuhkuihpdgmuxgqn ; /usr/bin/python3'
Nov 22 05:27:59 compute-0 sudo[105652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:27:59 compute-0 python3[105654]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.595335232 +0000 UTC m=+0.042113491 container create 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 22 05:27:59 compute-0 systemd[1]: Started libpod-conmon-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope.
Nov 22 05:27:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 22 05:27:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.577121463 +0000 UTC m=+0.023899762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.681047702 +0000 UTC m=+0.127826011 container init 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.689270072 +0000 UTC m=+0.136048331 container start 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.693561838 +0000 UTC m=+0.140340097 container attach 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:27:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:27:59 compute-0 sharp_raman[105670]: could not fetch user info: no user info saved
Nov 22 05:27:59 compute-0 systemd[1]: libpod-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope: Deactivated successfully.
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.889860895 +0000 UTC m=+0.336639154 container died 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:27:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c73955b284eac58526a2857673ff5a593ab9a6f65d132cfb340cf5d02d70f0b4-merged.mount: Deactivated successfully.
Nov 22 05:27:59 compute-0 podman[105655]: 2025-11-22 05:27:59.936228139 +0000 UTC m=+0.383006398 container remove 0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92 (image=quay.io/ceph/ceph:v18, name=sharp_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:27:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 22 05:27:59 compute-0 systemd[1]: libpod-conmon-0023a66115acbdf0f387b9018bbb3d6fac186918b9390c7d82e74aa396476c92.scope: Deactivated successfully.
Nov 22 05:27:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 22 05:27:59 compute-0 sudo[105652]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:00 compute-0 sudo[105789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncwxioklxdheudxtfnrjzjjtejbjlut ; /usr/bin/python3'
Nov 22 05:28:00 compute-0 sudo[105789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:00 compute-0 python3[105791]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.320045428 +0000 UTC m=+0.055012058 container create 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:00 compute-0 systemd[1]: Started libpod-conmon-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope.
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.291336247 +0000 UTC m=+0.026302967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 05:28:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.409641992 +0000 UTC m=+0.144608692 container init 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.416757223 +0000 UTC m=+0.151723853 container start 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.420952166 +0000 UTC m=+0.155918886 container attach 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:28:00 compute-0 zen_gates[105808]: {
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "user_id": "openstack",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "display_name": "openstack",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "email": "",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "suspended": 0,
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "max_buckets": 1000,
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "subusers": [],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "keys": [
Nov 22 05:28:00 compute-0 zen_gates[105808]:         {
Nov 22 05:28:00 compute-0 zen_gates[105808]:             "user": "openstack",
Nov 22 05:28:00 compute-0 zen_gates[105808]:             "access_key": "45SSD5RELPBSE4WQZF6L",
Nov 22 05:28:00 compute-0 zen_gates[105808]:             "secret_key": "VDGYg3Z5z1Bs0wbo3i7bLtzN8JHYmT4RogDbSS31"
Nov 22 05:28:00 compute-0 zen_gates[105808]:         }
Nov 22 05:28:00 compute-0 zen_gates[105808]:     ],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "swift_keys": [],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "caps": [],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "op_mask": "read, write, delete",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "default_placement": "",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "default_storage_class": "",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "placement_tags": [],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "bucket_quota": {
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "enabled": false,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "check_on_raw": false,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_size": -1,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_size_kb": 0,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_objects": -1
Nov 22 05:28:00 compute-0 zen_gates[105808]:     },
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "user_quota": {
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "enabled": false,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "check_on_raw": false,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_size": -1,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_size_kb": 0,
Nov 22 05:28:00 compute-0 zen_gates[105808]:         "max_objects": -1
Nov 22 05:28:00 compute-0 zen_gates[105808]:     },
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "temp_url_keys": [],
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "type": "rgw",
Nov 22 05:28:00 compute-0 zen_gates[105808]:     "mfa_ids": []
Nov 22 05:28:00 compute-0 zen_gates[105808]: }
Nov 22 05:28:00 compute-0 zen_gates[105808]: 
Nov 22 05:28:00 compute-0 systemd[1]: libpod-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope: Deactivated successfully.
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.64959847 +0000 UTC m=+0.384565100 container died 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1c81fe387491afe299dff25fb7686cd4b3e49d22928ac0346f818d682df4fa-merged.mount: Deactivated successfully.
Nov 22 05:28:00 compute-0 podman[105792]: 2025-11-22 05:28:00.686616504 +0000 UTC m=+0.421583134 container remove 079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6 (image=quay.io/ceph/ceph:v18, name=zen_gates, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:28:00 compute-0 systemd[1]: libpod-conmon-079b6f1f6792508ce84db8ebfa14bd9ca5d34e007abcfb08c2b32f76c82fecf6.scope: Deactivated successfully.
Nov 22 05:28:00 compute-0 sudo[105789]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:00 compute-0 ceph-mon[75840]: 4.d scrub starts
Nov 22 05:28:00 compute-0 ceph-mon[75840]: 4.d scrub ok
Nov 22 05:28:00 compute-0 ceph-mon[75840]: pgmap v151: 321 pgs: 1 peering, 31 unknown, 289 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:00 compute-0 ceph-mon[75840]: 2.19 scrub starts
Nov 22 05:28:00 compute-0 ceph-mon[75840]: 2.19 scrub ok
Nov 22 05:28:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 22 05:28:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 22 05:28:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 0 op/s
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 22 05:28:01 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.859148026s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.079818726s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864239693s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.084945679s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864306450s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085037231s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.858970642s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.079696655s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.859076500s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.079818726s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864220619s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085037231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.d( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864105225s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.084945679s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864074707s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085067749s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864051819s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085067749s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864281654s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085372925s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864239693s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085372925s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864126205s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085403442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864109039s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085357666s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864109993s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085403442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864319801s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085678101s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864022255s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085357666s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864291191s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085678101s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864405632s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085906982s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.858904839s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.079696655s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864633560s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086242676s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864316940s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086013794s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864212036s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085922241s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864229202s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086013794s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864137650s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085922241s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864124298s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.085922241s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864116669s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085906982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863928795s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.085922241s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863899231s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086059570s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.9( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863842010s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086059570s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863701820s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086029053s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.864584923s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086242676s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863669395s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086029053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863524437s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086105347s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863456726s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086090088s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.14( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863484383s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086105347s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=59/60 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863412857s) [1] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086090088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863309860s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086227417s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863193512s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active pruub 144.086135864s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863249779s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086227417s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863136292s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.086135864s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863255501s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.086120605s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.863077164s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 62'22 active pruub 144.085968018s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.15( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.862845421s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.086120605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[10.e( v 62'23 (0'0,62'23] local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.862667084s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=62'23 lcod 62'22 mlcod 0'0 unknown NOTIFY pruub 144.085968018s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-mon[75840]: 3.18 scrub starts
Nov 22 05:28:01 compute-0 ceph-mon[75840]: 3.18 scrub ok
Nov 22 05:28:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.13( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.10( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.9( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.8( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.11( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.15( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.1a( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.19( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.4( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.6( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.7( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.2( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.17( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.b( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.d( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.1e( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.16( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[10.1( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.f( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.12( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[10.14( empty local-lis/les=0/0 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.873176575s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.214599609s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845706940s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.187194824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.873139381s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.214599609s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845680237s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.187194824s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878857613s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220581055s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845470428s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.187225342s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878817558s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220581055s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.845444679s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.187225342s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878869057s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220687866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818603516s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160446167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878838539s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220687866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818561554s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160446167s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818313599s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160354614s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852367401s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194427490s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852294922s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194427490s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878567696s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220718384s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.818254471s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160354614s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878544807s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220718384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878473282s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220748901s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878455162s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220748901s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852027893s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194442749s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851988792s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194442749s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878264427s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220825195s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817679405s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160263062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817655563s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160263062s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878222466s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220825195s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852129936s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194763184s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.852100372s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194763184s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878142357s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220886230s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817591667s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160369873s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817516327s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160293579s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878114700s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220886230s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817567825s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160369873s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817477226s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160293579s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877993584s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.220825195s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.17( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877963066s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.220825195s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817594528s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160339355s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851891518s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194824219s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817395210s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160339355s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.878018379s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221008301s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851870537s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194824219s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877997398s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221008301s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817250252s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160293579s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.15( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.817233086s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160293579s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877840042s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 active pruub 144.220977783s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851682663s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194824219s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.9( v 62'1 (0'0,62'1] local-lis/les=61/62 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877781868s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=62'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.220977783s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.2( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851609230s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194824219s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851533890s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194778442s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851512909s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194778442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.15( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816669464s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160003662s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877687454s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221023560s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877670288s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221023560s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816639900s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160003662s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816723824s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160171509s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816693306s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160171509s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816498756s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160018921s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877540588s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221084595s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877521515s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221084595s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851271629s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194961548s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.851255417s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194961548s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877268791s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221054077s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877254486s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221054077s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816029549s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159942627s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816016197s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159942627s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877111435s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221145630s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.877096176s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850822449s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.194885254s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850746155s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.194885254s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815528870s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159805298s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850737572s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195037842s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=57/58 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815503120s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159805298s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876886368s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221221924s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850709915s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195037842s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876855850s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815890312s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160385132s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.d( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.14( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815864563s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160385132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876600266s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221237183s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850482941s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195144653s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.850457191s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195144653s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876572609s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221237183s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876490593s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221221924s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876454353s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.814990044s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159805298s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.814963341s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159805298s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815032959s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.160049438s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.10( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.815011978s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160049438s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.2( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.9( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876410484s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221252441s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876093864s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221313477s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876065254s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221252441s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.876073837s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221313477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813920021s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159286499s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849807739s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195175171s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813898087s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159286499s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875850677s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221328735s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849752426s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195175171s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875827789s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221328735s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849766731s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195312500s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.849741936s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.816480637s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.160018921s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875667572s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221343994s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875649452s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221343994s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875610352s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221359253s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.8( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812859535s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159240723s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.875589371s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221359253s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812825203s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159240723s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.807912827s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.154525757s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874752045s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221389771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.807893753s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.154525757s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874724388s) [0] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221389771s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813027382s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159835815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848503113s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195327759s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.813008308s) [0] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159835815s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848477364s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195327759s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874420166s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active pruub 144.221359253s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63 pruub=9.874378204s) [2] r=-1 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 144.221359253s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848275185s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 150.195404053s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63 pruub=15.848199844s) [0] r=-1 lpr=63 pi=[59,63)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.195404053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812411308s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 148.159774780s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 63 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=57/58 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63 pruub=13.812382698s) [2] r=-1 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.159774780s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.3( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.4( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.18( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.1f( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.11( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.6( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.1c( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.11( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.6( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 63 pg[8.12( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.18( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1d( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.9( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.10( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[11.19( empty local-lis/les=0/0 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[8.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 63 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 22 05:28:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 22 05:28:01 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 22 05:28:01 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 22 05:28:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 22 05:28:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 22 05:28:02 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.13( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] r=-1 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1a( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.1a( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.8( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.15( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.9( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.d( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.16( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.4( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.12( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.11( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1b( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1c( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.18( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.1f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.9( v 62'1 lc 0'0 (0'0,62'1] local-lis/les=63/64 n=1 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=62'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.8( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.d( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=63/64 n=1 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.2( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.15( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [2] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 64 pg[11.3( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [2] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.2( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.b( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.19( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.11( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.10( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.6( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.12( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.14( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 64 pg[10.f( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [1] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.1( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.1e( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.17( v 52'16 (0'0,52'16] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.e( v 62'23 lc 62'22 (0'0,62'23] local-lis/les=63/64 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=62'23 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[10.7( v 52'16 (0'0,52'16] local-lis/les=63/64 n=1 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=63) [0] r=0 lpr=63 pi=[59,63)/1 crt=52'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.14( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.6( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[11.10( empty local-lis/les=63/64 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=63) [0] r=0 lpr=63 pi=[61,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 64 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=63/64 n=0 ec=57/47 lis/c=57/57 les/c/f=58/58/0 sis=63) [0] r=0 lpr=63 pi=[57,63)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:02 compute-0 ceph-mon[75840]: pgmap v152: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 0 op/s
Nov 22 05:28:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 05:28:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:28:02 compute-0 ceph-mon[75840]: osdmap e63: 3 total, 3 up, 3 in
Nov 22 05:28:02 compute-0 ceph-mon[75840]: 7.1c scrub starts
Nov 22 05:28:02 compute-0 ceph-mon[75840]: 7.1c scrub ok
Nov 22 05:28:02 compute-0 ceph-mon[75840]: 2.16 scrub starts
Nov 22 05:28:02 compute-0 ceph-mon[75840]: 2.16 scrub ok
Nov 22 05:28:02 compute-0 ceph-mon[75840]: osdmap e64: 3 total, 3 up, 3 in
Nov 22 05:28:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 22 05:28:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 22 05:28:03 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 65 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[59,64)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v156: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 22 05:28:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 05:28:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 05:28:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 22 05:28:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 05:28:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 22 05:28:04 compute-0 ceph-mon[75840]: osdmap e65: 3 total, 3 up, 3 in
Nov 22 05:28:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 05:28:04 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990532875s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.639038086s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990456581s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.639038086s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989582062s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638519287s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.990011215s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989490509s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638519287s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989901543s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989019394s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638381958s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988957405s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638381958s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989553452s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638580322s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988526344s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638214111s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988478661s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638214111s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988895416s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638687134s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988797188s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638687134s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988625526s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638580322s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988069534s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638092041s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.987929344s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638092041s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988601685s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638885498s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988614082s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638900757s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988554955s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638885498s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988540649s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638900757s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.989434242s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988452911s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988288879s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638977051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.979233742s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.630081177s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988225937s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638977051s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.979169846s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.630081177s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.988011360s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.639053345s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.986865044s) [0] async=[0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638015747s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.987811089s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.639053345s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66 pruub=14.986662865s) [0] r=-1 lpr=66 pi=[59,66)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638015747s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 66 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 22 05:28:05 compute-0 ceph-mon[75840]: pgmap v156: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Nov 22 05:28:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 05:28:05 compute-0 ceph-mon[75840]: osdmap e66: 3 total, 3 up, 3 in
Nov 22 05:28:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 22 05:28:05 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 22 05:28:05 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=13.975119591s) [0] async=[0] r=-1 lpr=67 pi=[59,67)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 151.638442993s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:05 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=64/65 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67 pruub=13.975032806s) [0] r=-1 lpr=67 pi=[59,67)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.638442993s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.11( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.b( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.9( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.d( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.3( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1b( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1d( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 67 pg[9.5( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=66) [0] r=0 lpr=66 pi=[59,66)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 260 B/s wr, 0 op/s; 788 B/s, 25 objects/s recovering
Nov 22 05:28:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 22 05:28:06 compute-0 ceph-mon[75840]: osdmap e67: 3 total, 3 up, 3 in
Nov 22 05:28:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 22 05:28:06 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 22 05:28:06 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 68 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=64/59 les/c/f=65/60/0 sis=67) [0] r=0 lpr=67 pi=[59,67)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:07 compute-0 ceph-mon[75840]: pgmap v159: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 260 B/s rd, 260 B/s wr, 0 op/s; 788 B/s, 25 objects/s recovering
Nov 22 05:28:07 compute-0 ceph-mon[75840]: osdmap e68: 3 total, 3 up, 3 in
Nov 22 05:28:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 22 05:28:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 22 05:28:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v161: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 441 B/s wr, 0 op/s; 746 B/s, 21 objects/s recovering
Nov 22 05:28:07 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 22 05:28:07 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 22 05:28:09 compute-0 ceph-mon[75840]: 6.c scrub starts
Nov 22 05:28:09 compute-0 ceph-mon[75840]: 6.c scrub ok
Nov 22 05:28:09 compute-0 ceph-mon[75840]: pgmap v161: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 220 B/s rd, 441 B/s wr, 0 op/s; 746 B/s, 21 objects/s recovering
Nov 22 05:28:09 compute-0 ceph-mon[75840]: 7.11 scrub starts
Nov 22 05:28:09 compute-0 ceph-mon[75840]: 7.11 scrub ok
Nov 22 05:28:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 0 op/s; 576 B/s, 16 objects/s recovering
Nov 22 05:28:11 compute-0 ceph-mon[75840]: pgmap v162: 321 pgs: 15 peering, 306 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 0 op/s; 576 B/s, 16 objects/s recovering
Nov 22 05:28:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 134 B/s rd, 268 B/s wr, 0 op/s; 468 B/s, 14 objects/s recovering
Nov 22 05:28:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 05:28:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 05:28:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 22 05:28:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 05:28:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 05:28:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 22 05:28:12 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 22 05:28:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 22 05:28:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 22 05:28:13 compute-0 ceph-mon[75840]: pgmap v163: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 134 B/s rd, 268 B/s wr, 0 op/s; 468 B/s, 14 objects/s recovering
Nov 22 05:28:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 05:28:13 compute-0 ceph-mon[75840]: osdmap e69: 3 total, 3 up, 3 in
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s; 59 B/s, 1 objects/s recovering
Nov 22 05:28:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 05:28:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 22 05:28:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 05:28:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 22 05:28:14 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 22 05:28:14 compute-0 ceph-mon[75840]: 6.d scrub starts
Nov 22 05:28:14 compute-0 ceph-mon[75840]: 6.d scrub ok
Nov 22 05:28:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 05:28:14 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 22 05:28:14 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 22 05:28:14 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Nov 22 05:28:14 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Nov 22 05:28:14 compute-0 sshd-session[105905]: Accepted publickey for zuul from 192.168.122.30 port 58654 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:28:14 compute-0 systemd-logind[798]: New session 33 of user zuul.
Nov 22 05:28:14 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 22 05:28:14 compute-0 sshd-session[105905]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:28:15 compute-0 ceph-mon[75840]: pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 127 B/s wr, 0 op/s; 59 B/s, 1 objects/s recovering
Nov 22 05:28:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 05:28:15 compute-0 ceph-mon[75840]: osdmap e70: 3 total, 3 up, 3 in
Nov 22 05:28:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 22 05:28:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 05:28:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 05:28:15 compute-0 python3.9[106058]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:28:15 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 22 05:28:15 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 22 05:28:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 22 05:28:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 05:28:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 22 05:28:16 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 3.16 scrub starts
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 3.16 scrub ok
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 6.2 scrub starts
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 6.2 scrub ok
Nov 22 05:28:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 2.13 scrub starts
Nov 22 05:28:16 compute-0 ceph-mon[75840]: 2.13 scrub ok
Nov 22 05:28:16 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 22 05:28:16 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 22 05:28:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:17 compute-0 ceph-mon[75840]: pgmap v167: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 22 05:28:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 05:28:17 compute-0 ceph-mon[75840]: osdmap e71: 3 total, 3 up, 3 in
Nov 22 05:28:17 compute-0 sudo[106274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cknsypisomrcefqdduirnctxjrnbsqcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789297.0262933-32-66843614056568/AnsiballZ_command.py'
Nov 22 05:28:17 compute-0 sudo[106274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:17 compute-0 python3.9[106276]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:28:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 05:28:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 05:28:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 22 05:28:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 05:28:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 22 05:28:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 22 05:28:18 compute-0 ceph-mon[75840]: 4.f scrub starts
Nov 22 05:28:18 compute-0 ceph-mon[75840]: 4.f scrub ok
Nov 22 05:28:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 05:28:18 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.14 deep-scrub starts
Nov 22 05:28:18 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.14 deep-scrub ok
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456052780s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.194747925s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456364632s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195281982s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456313133s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195281982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456233025s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195312500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.456089020s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455579758s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.194747925s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455844879s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 166.195404053s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 72 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72 pruub=14.455754280s) [2] r=-1 lpr=72 pi=[59,72)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.195404053s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 22 05:28:19 compute-0 ceph-mon[75840]: pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 05:28:19 compute-0 ceph-mon[75840]: osdmap e72: 3 total, 3 up, 3 in
Nov 22 05:28:19 compute-0 ceph-mon[75840]: 5.14 deep-scrub starts
Nov 22 05:28:19 compute-0 ceph-mon[75840]: 5.14 deep-scrub ok
Nov 22 05:28:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 22 05:28:19 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=-1 lpr=73 pi=[59,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:19 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 73 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 05:28:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 05:28:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Nov 22 05:28:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Nov 22 05:28:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 22 05:28:20 compute-0 ceph-mon[75840]: osdmap e73: 3 total, 3 up, 3 in
Nov 22 05:28:20 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 05:28:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 05:28:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 22 05:28:20 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 22 05:28:20 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:20 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:20 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:20 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 74 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[59,73)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:20 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 22 05:28:20 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 22 05:28:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 22 05:28:21 compute-0 ceph-mon[75840]: pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:21 compute-0 ceph-mon[75840]: 6.6 deep-scrub starts
Nov 22 05:28:21 compute-0 ceph-mon[75840]: 6.6 deep-scrub ok
Nov 22 05:28:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 05:28:21 compute-0 ceph-mon[75840]: osdmap e74: 3 total, 3 up, 3 in
Nov 22 05:28:21 compute-0 ceph-mon[75840]: 2.18 scrub starts
Nov 22 05:28:21 compute-0 ceph-mon[75840]: 2.18 scrub ok
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.830095291s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.121322632s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829577446s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.121124268s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829159737s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.120864868s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829389572s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.121124268s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829584122s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.121322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.829086304s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.120864868s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.827204704s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 active pruub 175.120880127s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 74 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74 pruub=15.826862335s) [2] r=-1 lpr=74 pi=[66,74)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 175.120880127s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=74) [2] r=0 lpr=74 pi=[66,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 22 05:28:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979733467s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.857635498s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.982757568s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.860687256s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.982709885s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.860687256s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979663849s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.857635498s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979160309s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.857666016s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=73/74 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.979077339s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.857666016s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.976745605s) [2] async=[2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 168.855407715s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:21 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 75 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=73/74 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75 pruub=14.976223946s) [2] r=-1 lpr=75 pi=[59,75)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 168.855407715s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v175: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 05:28:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 22 05:28:21 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 22 05:28:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 22 05:28:21 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 22 05:28:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 22 05:28:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 22 05:28:22 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=-1 lpr=76 pi=[66,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.6( v 55'578 (0'0,55'578] local-lis/les=75/76 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:22 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 76 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=7 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 76 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=73/59 les/c/f=74/60/0 sis=75) [2] r=0 lpr=75 pi=[59,75)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:22 compute-0 ceph-mon[75840]: osdmap e75: 3 total, 3 up, 3 in
Nov 22 05:28:22 compute-0 ceph-mon[75840]: osdmap e76: 3 total, 3 up, 3 in
Nov 22 05:28:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 22 05:28:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 22 05:28:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 22 05:28:23 compute-0 ceph-mon[75840]: pgmap v175: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 05:28:23 compute-0 ceph-mon[75840]: 4.4 scrub starts
Nov 22 05:28:23 compute-0 ceph-mon[75840]: 7.15 scrub starts
Nov 22 05:28:23 compute-0 ceph-mon[75840]: 4.4 scrub ok
Nov 22 05:28:23 compute-0 ceph-mon[75840]: 7.15 scrub ok
Nov 22 05:28:23 compute-0 ceph-mon[75840]: osdmap e77: 3 total, 3 up, 3 in
Nov 22 05:28:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v178: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 05:28:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 77 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=76) [2]/[0] async=[2] r=0 lpr=76 pi=[66,76)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 22 05:28:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 22 05:28:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434611320s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.791946411s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434338570s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.791931152s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.434274673s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.791931152s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.439762115s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.797256470s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438985825s) [2] async=[2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 55'578 active pruub 177.797241211s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438868523s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.797241211s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.433600426s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.791946411s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=76/77 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78 pruub=15.438651085s) [2] r=-1 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 177.797256470s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:24 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 78 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:24 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 22 05:28:24 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 22 05:28:25 compute-0 sudo[106274]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:25 compute-0 sudo[106333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:25 compute-0 sudo[106333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:25 compute-0 sudo[106333]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 22 05:28:25 compute-0 ceph-mon[75840]: pgmap v178: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 05:28:25 compute-0 ceph-mon[75840]: osdmap e78: 3 total, 3 up, 3 in
Nov 22 05:28:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 22 05:28:25 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 22 05:28:25 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:25 compute-0 sudo[106358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:28:25 compute-0 sudo[106358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:25 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.17( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:25 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:25 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 79 pg[9.7( v 55'578 (0'0,55'578] local-lis/les=78/79 n=7 ec=59/49 lis/c=76/66 les/c/f=77/67/0 sis=78) [2] r=0 lpr=78 pi=[66,78)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:25 compute-0 sudo[106358]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:25 compute-0 sshd-session[105908]: Connection closed by 192.168.122.30 port 58654
Nov 22 05:28:25 compute-0 sshd-session[105905]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:28:25 compute-0 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Nov 22 05:28:25 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 22 05:28:25 compute-0 systemd[1]: session-33.scope: Consumed 8.545s CPU time.
Nov 22 05:28:25 compute-0 systemd-logind[798]: Removed session 33.
Nov 22 05:28:25 compute-0 sudo[106383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:25 compute-0 sudo[106383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:25 compute-0 sudo[106383]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:25 compute-0 sudo[106408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:28:25 compute-0 sudo[106408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v181: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:26 compute-0 sudo[106408]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:26 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 10f554f2-6f32-4998-99f2-24393e580269 does not exist
Nov 22 05:28:26 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 356a2ee5-a32b-4d4b-aa48-0cfd1d97143d does not exist
Nov 22 05:28:26 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8e82d58e-75f3-4df9-8731-a15f83e7944b does not exist
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:28:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:28:26 compute-0 sudo[106464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:26 compute-0 sudo[106464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:26 compute-0 sudo[106464]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:26 compute-0 sudo[106489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:28:26 compute-0 sudo[106489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:26 compute-0 sudo[106489]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:26 compute-0 sudo[106514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:26 compute-0 sudo[106514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:26 compute-0 sudo[106514]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:26 compute-0 ceph-mon[75840]: 7.a scrub starts
Nov 22 05:28:26 compute-0 ceph-mon[75840]: 7.a scrub ok
Nov 22 05:28:26 compute-0 ceph-mon[75840]: osdmap e79: 3 total, 3 up, 3 in
Nov 22 05:28:26 compute-0 ceph-mon[75840]: pgmap v181: 321 pgs: 4 unknown, 4 peering, 313 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:28:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:28:26 compute-0 sudo[106539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:28:26 compute-0 sudo[106539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:26 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 22 05:28:26 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 22 05:28:26 compute-0 podman[106605]: 2025-11-22 05:28:26.922073577 +0000 UTC m=+0.062908649 container create 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:26 compute-0 systemd[1]: Started libpod-conmon-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope.
Nov 22 05:28:26 compute-0 podman[106605]: 2025-11-22 05:28:26.890008896 +0000 UTC m=+0.030843998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:27 compute-0 podman[106605]: 2025-11-22 05:28:27.025210515 +0000 UTC m=+0.166045667 container init 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:27 compute-0 podman[106605]: 2025-11-22 05:28:27.037808253 +0000 UTC m=+0.178643315 container start 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:27 compute-0 podman[106605]: 2025-11-22 05:28:27.042510149 +0000 UTC m=+0.183345271 container attach 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:28:27 compute-0 stoic_rosalind[106621]: 167 167
Nov 22 05:28:27 compute-0 systemd[1]: libpod-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope: Deactivated successfully.
Nov 22 05:28:27 compute-0 podman[106605]: 2025-11-22 05:28:27.046025693 +0000 UTC m=+0.186860755 container died 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:28:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4de9313a2c58113291b9deb7115c838f1c0bcb4f6d05225bf9854950e94ebcde-merged.mount: Deactivated successfully.
Nov 22 05:28:27 compute-0 podman[106605]: 2025-11-22 05:28:27.101089661 +0000 UTC m=+0.241924723 container remove 30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:28:27 compute-0 systemd[1]: libpod-conmon-30d9cbe4ad44503e617189b658c4f456d55d777bcf74dcfb4e0a34043139406b.scope: Deactivated successfully.
Nov 22 05:28:27 compute-0 podman[106643]: 2025-11-22 05:28:27.312134904 +0000 UTC m=+0.046498859 container create 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:28:27 compute-0 systemd[1]: Started libpod-conmon-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope.
Nov 22 05:28:27 compute-0 podman[106643]: 2025-11-22 05:28:27.291158191 +0000 UTC m=+0.025522176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:27 compute-0 podman[106643]: 2025-11-22 05:28:27.431720952 +0000 UTC m=+0.166084977 container init 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:28:27 compute-0 podman[106643]: 2025-11-22 05:28:27.44841524 +0000 UTC m=+0.182779215 container start 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:28:27 compute-0 podman[106643]: 2025-11-22 05:28:27.452157901 +0000 UTC m=+0.186521906 container attach 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:28:27 compute-0 ceph-mon[75840]: 4.1a scrub starts
Nov 22 05:28:27 compute-0 ceph-mon[75840]: 4.1a scrub ok
Nov 22 05:28:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 725 B/s wr, 35 op/s; 194 B/s, 8 objects/s recovering
Nov 22 05:28:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 05:28:27 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 05:28:27 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 22 05:28:27 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 22 05:28:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 22 05:28:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 22 05:28:28 compute-0 hungry_edison[106659]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:28:28 compute-0 hungry_edison[106659]: --> relative data size: 1.0
Nov 22 05:28:28 compute-0 hungry_edison[106659]: --> All data devices are unavailable
Nov 22 05:28:28 compute-0 systemd[1]: libpod-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Deactivated successfully.
Nov 22 05:28:28 compute-0 systemd[1]: libpod-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Consumed 1.004s CPU time.
Nov 22 05:28:28 compute-0 podman[106643]: 2025-11-22 05:28:28.503774938 +0000 UTC m=+1.238138873 container died 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ab281ea424810cd4b21954b1381f8078e82f238614b29a649b808fa26f44c57-merged.mount: Deactivated successfully.
Nov 22 05:28:28 compute-0 podman[106643]: 2025-11-22 05:28:28.590437424 +0000 UTC m=+1.324801359 container remove 42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:28:28 compute-0 systemd[1]: libpod-conmon-42e16ffd8047af7cab293392775a3695422502d93ac444480f950325020c7483.scope: Deactivated successfully.
Nov 22 05:28:28 compute-0 sudo[106539]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 22 05:28:28 compute-0 ceph-mon[75840]: pgmap v182: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 725 B/s wr, 35 op/s; 194 B/s, 8 objects/s recovering
Nov 22 05:28:28 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 05:28:28 compute-0 ceph-mon[75840]: 6.4 scrub starts
Nov 22 05:28:28 compute-0 ceph-mon[75840]: 6.4 scrub ok
Nov 22 05:28:28 compute-0 ceph-mon[75840]: 2.11 scrub starts
Nov 22 05:28:28 compute-0 ceph-mon[75840]: 2.11 scrub ok
Nov 22 05:28:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 05:28:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 22 05:28:28 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 22 05:28:28 compute-0 sudo[106699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:28 compute-0 sudo[106699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:28 compute-0 sudo[106699]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:28 compute-0 sudo[106724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:28:28 compute-0 sudo[106724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:28 compute-0 sudo[106724]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:28 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 22 05:28:28 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 22 05:28:28 compute-0 sudo[106749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:28 compute-0 sudo[106749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:28 compute-0 sudo[106749]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 22 05:28:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 22 05:28:28 compute-0 sudo[106774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:28:28 compute-0 sudo[106774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.351384251 +0000 UTC m=+0.042035909 container create cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:28:29 compute-0 systemd[1]: Started libpod-conmon-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope.
Nov 22 05:28:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.334618731 +0000 UTC m=+0.025270399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.443432321 +0000 UTC m=+0.134084059 container init cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.456030029 +0000 UTC m=+0.146681687 container start cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.461699581 +0000 UTC m=+0.152351319 container attach cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:28:29 compute-0 vigilant_ishizaka[106857]: 167 167
Nov 22 05:28:29 compute-0 systemd[1]: libpod-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope: Deactivated successfully.
Nov 22 05:28:29 compute-0 conmon[106857]: conmon cc1c4f0da7f0944371ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope/container/memory.events
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.46384839 +0000 UTC m=+0.154500068 container died cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 05:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-678af39e56e29a5fac3bbe15f45436e4d6fbfb7c58fdeb685d2bb9cef52c10df-merged.mount: Deactivated successfully.
Nov 22 05:28:29 compute-0 podman[106841]: 2025-11-22 05:28:29.512163306 +0000 UTC m=+0.202814984 container remove cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ishizaka, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:28:29 compute-0 systemd[1]: libpod-conmon-cc1c4f0da7f0944371ce4c87f4e91cf095af27e8ab9e3d06a0ed6459cc1076cd.scope: Deactivated successfully.
Nov 22 05:28:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 05:28:29 compute-0 ceph-mon[75840]: osdmap e80: 3 total, 3 up, 3 in
Nov 22 05:28:29 compute-0 ceph-mon[75840]: 6.f scrub starts
Nov 22 05:28:29 compute-0 ceph-mon[75840]: 6.f scrub ok
Nov 22 05:28:29 compute-0 ceph-mon[75840]: 2.f scrub starts
Nov 22 05:28:29 compute-0 ceph-mon[75840]: 2.f scrub ok
Nov 22 05:28:29 compute-0 podman[106878]: 2025-11-22 05:28:29.686091352 +0000 UTC m=+0.045010478 container create 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:28:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 32 op/s; 183 B/s, 7 objects/s recovering
Nov 22 05:28:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 05:28:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 05:28:29 compute-0 systemd[1]: Started libpod-conmon-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope.
Nov 22 05:28:29 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 22 05:28:29 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 22 05:28:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:29 compute-0 podman[106878]: 2025-11-22 05:28:29.668947952 +0000 UTC m=+0.027867078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:29 compute-0 podman[106878]: 2025-11-22 05:28:29.776128579 +0000 UTC m=+0.135047735 container init 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:28:29 compute-0 podman[106878]: 2025-11-22 05:28:29.788095479 +0000 UTC m=+0.147014625 container start 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:29 compute-0 podman[106878]: 2025-11-22 05:28:29.793053583 +0000 UTC m=+0.151972779 container attach 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:28:30 compute-0 kind_darwin[106895]: {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     "0": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "devices": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "/dev/loop3"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             ],
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_name": "ceph_lv0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_size": "21470642176",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "name": "ceph_lv0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "tags": {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_name": "ceph",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.crush_device_class": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.encrypted": "0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_id": "0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.vdo": "0"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             },
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "vg_name": "ceph_vg0"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         }
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     ],
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     "1": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "devices": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "/dev/loop4"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             ],
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_name": "ceph_lv1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_size": "21470642176",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "name": "ceph_lv1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "tags": {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_name": "ceph",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.crush_device_class": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.encrypted": "0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_id": "1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.vdo": "0"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             },
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "vg_name": "ceph_vg1"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         }
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     ],
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     "2": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "devices": [
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "/dev/loop5"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             ],
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_name": "ceph_lv2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_size": "21470642176",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "name": "ceph_lv2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "tags": {
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.cluster_name": "ceph",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.crush_device_class": "",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.encrypted": "0",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osd_id": "2",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:                 "ceph.vdo": "0"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             },
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "type": "block",
Nov 22 05:28:30 compute-0 kind_darwin[106895]:             "vg_name": "ceph_vg2"
Nov 22 05:28:30 compute-0 kind_darwin[106895]:         }
Nov 22 05:28:30 compute-0 kind_darwin[106895]:     ]
Nov 22 05:28:30 compute-0 kind_darwin[106895]: }
Nov 22 05:28:30 compute-0 systemd[1]: libpod-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope: Deactivated successfully.
Nov 22 05:28:30 compute-0 conmon[106895]: conmon 0d22ebd2753127d44570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope/container/memory.events
Nov 22 05:28:30 compute-0 podman[106878]: 2025-11-22 05:28:30.579154135 +0000 UTC m=+0.938073331 container died 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048869133s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 174.195495605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048822403s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.195495605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048466682s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 174.195465088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 80 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80 pruub=11.048411369s) [2] r=-1 lpr=80 pi=[59,80)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.195465088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2ef184b02b0ee40c43317ab7150f6b4dda5dbcfe22592d7cd45b73272c37380-merged.mount: Deactivated successfully.
Nov 22 05:28:30 compute-0 podman[106878]: 2025-11-22 05:28:30.644227242 +0000 UTC m=+1.003146358 container remove 0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:28:30 compute-0 systemd[1]: libpod-conmon-0d22ebd2753127d44570ae6c6ea16931b829200fd58df84839490159190736e1.scope: Deactivated successfully.
Nov 22 05:28:30 compute-0 sudo[106774]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 22 05:28:30 compute-0 ceph-mon[75840]: pgmap v184: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 682 B/s wr, 32 op/s; 183 B/s, 7 objects/s recovering
Nov 22 05:28:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 05:28:30 compute-0 ceph-mon[75840]: 6.1 scrub starts
Nov 22 05:28:30 compute-0 ceph-mon[75840]: 6.1 scrub ok
Nov 22 05:28:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 05:28:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 22 05:28:30 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[59,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 81 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:30 compute-0 sudo[106918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:30 compute-0 sudo[106918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:30 compute-0 sudo[106918]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:30 compute-0 sudo[106943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:28:30 compute-0 sudo[106943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:30 compute-0 sudo[106943]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:30 compute-0 sudo[106968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:30 compute-0 sudo[106968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:30 compute-0 sudo[106968]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:31 compute-0 sudo[106993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:28:31 compute-0 sudo[106993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.42007865 +0000 UTC m=+0.062878988 container create cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:28:31 compute-0 systemd[1]: Started libpod-conmon-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope.
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.392321955 +0000 UTC m=+0.035122353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.513411184 +0000 UTC m=+0.156211532 container init cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.520573467 +0000 UTC m=+0.163373805 container start cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.524563793 +0000 UTC m=+0.167364201 container attach cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:28:31 compute-0 festive_feynman[107074]: 167 167
Nov 22 05:28:31 compute-0 systemd[1]: libpod-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope: Deactivated successfully.
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.527903723 +0000 UTC m=+0.170704061 container died cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aaeb76da7b80f8ad64f3849e27b55f20b33c73ac1799d76f2b3a5ab1e7631a6-merged.mount: Deactivated successfully.
Nov 22 05:28:31 compute-0 podman[107058]: 2025-11-22 05:28:31.579274761 +0000 UTC m=+0.222075089 container remove cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:28:31 compute-0 systemd[1]: libpod-conmon-cf58d7e3b43ab4aaac366e4c74701147728a5305439a86a428beb633519ddc4b.scope: Deactivated successfully.
Nov 22 05:28:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 22 05:28:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 05:28:31 compute-0 ceph-mon[75840]: osdmap e81: 3 total, 3 up, 3 in
Nov 22 05:28:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 485 B/s wr, 31 op/s; 173 B/s, 7 objects/s recovering
Nov 22 05:28:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 05:28:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 05:28:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 22 05:28:31 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 22 05:28:31 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 82 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:31 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 82 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[59,81)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:31 compute-0 podman[107099]: 2025-11-22 05:28:31.821702057 +0000 UTC m=+0.067334808 container create 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:28:31 compute-0 systemd[1]: Started libpod-conmon-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope.
Nov 22 05:28:31 compute-0 podman[107099]: 2025-11-22 05:28:31.792457952 +0000 UTC m=+0.038090743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:28:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:28:31 compute-0 podman[107099]: 2025-11-22 05:28:31.932538481 +0000 UTC m=+0.178171222 container init 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:28:31 compute-0 podman[107099]: 2025-11-22 05:28:31.947309357 +0000 UTC m=+0.192942098 container start 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:28:31 compute-0 podman[107099]: 2025-11-22 05:28:31.951499559 +0000 UTC m=+0.197132290 container attach 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:28:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 22 05:28:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 22 05:28:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 22 05:28:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 05:28:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 22 05:28:32 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 22 05:28:32 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.736738205s) [2] async=[2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 180.362548828s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:32 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=81/82 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.736606598s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.362548828s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:32 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.732250214s) [2] async=[2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 180.358383179s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:32 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=81/82 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83 pruub=15.732167244s) [2] r=-1 lpr=83 pi=[59,83)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.358383179s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:32 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:32 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:32 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:32 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 83 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 22 05:28:32 compute-0 ceph-mon[75840]: pgmap v186: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 485 B/s wr, 31 op/s; 173 B/s, 7 objects/s recovering
Nov 22 05:28:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 05:28:32 compute-0 ceph-mon[75840]: osdmap e82: 3 total, 3 up, 3 in
Nov 22 05:28:32 compute-0 ceph-mon[75840]: 5.7 scrub starts
Nov 22 05:28:32 compute-0 ceph-mon[75840]: 5.7 scrub ok
Nov 22 05:28:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 05:28:32 compute-0 ceph-mon[75840]: osdmap e83: 3 total, 3 up, 3 in
Nov 22 05:28:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 22 05:28:32 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 22 05:28:32 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 22 05:28:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 22 05:28:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]: {
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_id": 1,
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "type": "bluestore"
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     },
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_id": 2,
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "type": "bluestore"
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     },
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_id": 0,
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:         "type": "bluestore"
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]:     }
Nov 22 05:28:33 compute-0 eloquent_feistel[107115]: }
Nov 22 05:28:33 compute-0 systemd[1]: libpod-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Deactivated successfully.
Nov 22 05:28:33 compute-0 systemd[1]: libpod-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Consumed 1.101s CPU time.
Nov 22 05:28:33 compute-0 podman[107099]: 2025-11-22 05:28:33.043694935 +0000 UTC m=+1.289327676 container died 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:28:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 22 05:28:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 22 05:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab9ded5844186c3ff5611d0fdc6bf688ed84dad3eb9bc959b87dc92b5b59099c-merged.mount: Deactivated successfully.
Nov 22 05:28:33 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 22 05:28:33 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 84 pg[9.18( v 55'578 (0'0,55'578] local-lis/les=83/84 n=6 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:33 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 84 pg[9.8( v 55'578 (0'0,55'578] local-lis/les=83/84 n=7 ec=59/49 lis/c=81/59 les/c/f=82/60/0 sis=83) [2] r=0 lpr=83 pi=[59,83)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:33 compute-0 podman[107099]: 2025-11-22 05:28:33.11877826 +0000 UTC m=+1.364410971 container remove 112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:28:33 compute-0 systemd[1]: libpod-conmon-112f27475961786d0c21903721e73cb5bde9fc21ca1d80ad38087e8de435bd45.scope: Deactivated successfully.
Nov 22 05:28:33 compute-0 sudo[106993]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:28:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:28:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 913c5b84-5e41-4ac0-9590-6038787f0d3d does not exist
Nov 22 05:28:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5116423c-4e8f-4761-94a7-57e9ab8367ce does not exist
Nov 22 05:28:33 compute-0 sudo[107163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:28:33 compute-0 sudo[107163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:33 compute-0 sudo[107163]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:33 compute-0 sudo[107188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:28:33 compute-0 sudo[107188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:28:33 compute-0 sudo[107188]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 05:28:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 6.e scrub starts
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 6.e scrub ok
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 4.e scrub starts
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 4.e scrub ok
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 5.15 scrub starts
Nov 22 05:28:33 compute-0 ceph-mon[75840]: 5.15 scrub ok
Nov 22 05:28:33 compute-0 ceph-mon[75840]: osdmap e84: 3 total, 3 up, 3 in
Nov 22 05:28:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:28:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 05:28:33 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 22 05:28:33 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 22 05:28:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 22 05:28:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 05:28:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 22 05:28:34 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 22 05:28:34 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Nov 22 05:28:34 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Nov 22 05:28:35 compute-0 ceph-mon[75840]: pgmap v190: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:35 compute-0 ceph-mon[75840]: 6.8 scrub starts
Nov 22 05:28:35 compute-0 ceph-mon[75840]: 6.8 scrub ok
Nov 22 05:28:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 05:28:35 compute-0 ceph-mon[75840]: osdmap e85: 3 total, 3 up, 3 in
Nov 22 05:28:35 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Nov 22 05:28:35 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Nov 22 05:28:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 05:28:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 05:28:35 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Nov 22 05:28:35 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Nov 22 05:28:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 22 05:28:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 05:28:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 22 05:28:36 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 22 05:28:36 compute-0 ceph-mon[75840]: 4.a deep-scrub starts
Nov 22 05:28:36 compute-0 ceph-mon[75840]: 4.a deep-scrub ok
Nov 22 05:28:36 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 05:28:36 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246051788s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 182.195312500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:36 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.245971680s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.195312500s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:36 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246371269s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 182.195816040s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:36 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 86 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86 pruub=13.246341705s) [2] r=-1 lpr=86 pi=[59,86)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.195816040s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:36 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 86 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:36 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 86 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 22 05:28:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 22 05:28:37 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 22 05:28:37 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:37 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:37 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:37 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 87 pg[9.c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=-1 lpr=87 pi=[59,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:37 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:37 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:37 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:37 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 87 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:37 compute-0 ceph-mon[75840]: 6.b deep-scrub starts
Nov 22 05:28:37 compute-0 ceph-mon[75840]: 6.b deep-scrub ok
Nov 22 05:28:37 compute-0 ceph-mon[75840]: pgmap v192: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:37 compute-0 ceph-mon[75840]: 6.14 scrub starts
Nov 22 05:28:37 compute-0 ceph-mon[75840]: 6.14 scrub ok
Nov 22 05:28:37 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 05:28:37 compute-0 ceph-mon[75840]: osdmap e86: 3 total, 3 up, 3 in
Nov 22 05:28:37 compute-0 ceph-mon[75840]: osdmap e87: 3 total, 3 up, 3 in
Nov 22 05:28:37 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 22 05:28:37 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 22 05:28:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 3 objects/s recovering
Nov 22 05:28:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 22 05:28:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 22 05:28:38 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 22 05:28:38 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 22 05:28:38 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 22 05:28:38 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 88 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:38 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 88 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87) [2]/[1] async=[2] r=0 lpr=87 pi=[59,87)/1 crt=55'578 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:38 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Nov 22 05:28:38 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Nov 22 05:28:39 compute-0 ceph-mon[75840]: 4.9 scrub starts
Nov 22 05:28:39 compute-0 ceph-mon[75840]: 4.9 scrub ok
Nov 22 05:28:39 compute-0 ceph-mon[75840]: pgmap v195: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 3 objects/s recovering
Nov 22 05:28:39 compute-0 ceph-mon[75840]: osdmap e88: 3 total, 3 up, 3 in
Nov 22 05:28:39 compute-0 ceph-mon[75840]: 2.2 scrub starts
Nov 22 05:28:39 compute-0 ceph-mon[75840]: 2.2 scrub ok
Nov 22 05:28:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 22 05:28:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 22 05:28:39 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 22 05:28:39 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:39 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:39 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:39 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:39 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.161184311s) [2] async=[2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 186.946456909s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:39 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.161073685s) [2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.946456909s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:39 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.156772614s) [2] async=[2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 active pruub 186.942230225s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:39 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 89 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=87/88 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89 pruub=15.156560898s) [2] r=-1 lpr=89 pi=[59,89)/1 crt=55'578 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.942230225s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Nov 22 05:28:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Nov 22 05:28:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 22 05:28:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 22 05:28:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 22 05:28:40 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 22 05:28:40 compute-0 ceph-mon[75840]: 4.8 deep-scrub starts
Nov 22 05:28:40 compute-0 ceph-mon[75840]: 4.8 deep-scrub ok
Nov 22 05:28:40 compute-0 ceph-mon[75840]: osdmap e89: 3 total, 3 up, 3 in
Nov 22 05:28:40 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 90 pg[9.c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=7 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:40 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 90 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=87/59 les/c/f=88/60/0 sis=89) [2] r=0 lpr=89 pi=[59,89)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:40 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Nov 22 05:28:40 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Nov 22 05:28:41 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 22 05:28:41 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 22 05:28:41 compute-0 sshd-session[107213]: Accepted publickey for zuul from 192.168.122.30 port 44616 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:28:41 compute-0 systemd-logind[798]: New session 34 of user zuul.
Nov 22 05:28:41 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 22 05:28:41 compute-0 sshd-session[107213]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:28:41 compute-0 ceph-mon[75840]: 4.5 deep-scrub starts
Nov 22 05:28:41 compute-0 ceph-mon[75840]: 4.5 deep-scrub ok
Nov 22 05:28:41 compute-0 ceph-mon[75840]: pgmap v198: 321 pgs: 2 unknown, 319 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 3 objects/s recovering
Nov 22 05:28:41 compute-0 ceph-mon[75840]: osdmap e90: 3 total, 3 up, 3 in
Nov 22 05:28:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Nov 22 05:28:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Nov 22 05:28:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 220 B/s wr, 20 op/s; 23 B/s, 2 objects/s recovering
Nov 22 05:28:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 05:28:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 05:28:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:42 compute-0 python3.9[107366]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 05:28:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 22 05:28:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 22 05:28:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 22 05:28:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 05:28:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 22 05:28:42 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 22 05:28:42 compute-0 ceph-mon[75840]: 6.15 scrub starts
Nov 22 05:28:42 compute-0 ceph-mon[75840]: 6.15 scrub ok
Nov 22 05:28:42 compute-0 ceph-mon[75840]: 5.5 scrub starts
Nov 22 05:28:42 compute-0 ceph-mon[75840]: 5.5 scrub ok
Nov 22 05:28:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 05:28:43 compute-0 ceph-mon[75840]: 6.17 deep-scrub starts
Nov 22 05:28:43 compute-0 ceph-mon[75840]: 6.17 deep-scrub ok
Nov 22 05:28:43 compute-0 ceph-mon[75840]: pgmap v200: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 220 B/s wr, 20 op/s; 23 B/s, 2 objects/s recovering
Nov 22 05:28:43 compute-0 ceph-mon[75840]: 5.4 scrub starts
Nov 22 05:28:43 compute-0 ceph-mon[75840]: 5.4 scrub ok
Nov 22 05:28:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 05:28:43 compute-0 ceph-mon[75840]: osdmap e91: 3 total, 3 up, 3 in
Nov 22 05:28:43 compute-0 python3.9[107540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:28:43
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.mgr']
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 181 B/s wr, 16 op/s; 19 B/s, 2 objects/s recovering
Nov 22 05:28:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 05:28:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:28:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:28:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 22 05:28:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 22 05:28:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 22 05:28:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 05:28:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 05:28:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 22 05:28:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 22 05:28:44 compute-0 sudo[107694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjuhhegcgmnzlvyebltbwpqhtuakhtzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789323.971769-45-3463314329303/AnsiballZ_command.py'
Nov 22 05:28:44 compute-0 sudo[107694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:44 compute-0 python3.9[107696]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:28:44 compute-0 sudo[107694]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:44 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Nov 22 05:28:44 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Nov 22 05:28:45 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 22 05:28:45 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 22 05:28:45 compute-0 ceph-mon[75840]: pgmap v202: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 181 B/s wr, 16 op/s; 19 B/s, 2 objects/s recovering
Nov 22 05:28:45 compute-0 ceph-mon[75840]: 4.13 scrub starts
Nov 22 05:28:45 compute-0 ceph-mon[75840]: 4.13 scrub ok
Nov 22 05:28:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 05:28:45 compute-0 ceph-mon[75840]: osdmap e92: 3 total, 3 up, 3 in
Nov 22 05:28:45 compute-0 sudo[107847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbjqqmydjilhyjphupqwfqdtxvwubtnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789325.0155277-57-153742687173488/AnsiballZ_stat.py'
Nov 22 05:28:45 compute-0 sudo[107847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Nov 22 05:28:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 05:28:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 05:28:45 compute-0 python3.9[107849]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:28:45 compute-0 sudo[107847]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:45 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Nov 22 05:28:45 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Nov 22 05:28:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 22 05:28:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 22 05:28:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 22 05:28:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 05:28:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 22 05:28:46 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 22 05:28:46 compute-0 ceph-mon[75840]: 6.11 scrub starts
Nov 22 05:28:46 compute-0 ceph-mon[75840]: 6.11 scrub ok
Nov 22 05:28:46 compute-0 ceph-mon[75840]: 2.8 scrub starts
Nov 22 05:28:46 compute-0 ceph-mon[75840]: 2.8 scrub ok
Nov 22 05:28:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 05:28:46 compute-0 sudo[108001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bymeclpwkjcckwhrmwbovkrgkaqiusws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789326.086129-68-216083504829882/AnsiballZ_file.py'
Nov 22 05:28:46 compute-0 sudo[108001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:46 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 22 05:28:46 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 22 05:28:46 compute-0 python3.9[108003]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:28:46 compute-0 sudo[108001]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:47 compute-0 ceph-mon[75840]: pgmap v204: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Nov 22 05:28:47 compute-0 ceph-mon[75840]: 6.13 scrub starts
Nov 22 05:28:47 compute-0 ceph-mon[75840]: 6.13 scrub ok
Nov 22 05:28:47 compute-0 ceph-mon[75840]: 5.3 scrub starts
Nov 22 05:28:47 compute-0 ceph-mon[75840]: 5.3 scrub ok
Nov 22 05:28:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 05:28:47 compute-0 ceph-mon[75840]: osdmap e93: 3 total, 3 up, 3 in
Nov 22 05:28:47 compute-0 sudo[108153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgsaqayjqclgewvispibankpqrrosysc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789327.0571754-77-174584154727146/AnsiballZ_file.py'
Nov 22 05:28:47 compute-0 sudo[108153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:47 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 22 05:28:47 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 22 05:28:47 compute-0 python3.9[108155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:28:47 compute-0 sudo[108153]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 22 05:28:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 05:28:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 22 05:28:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 05:28:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 22 05:28:48 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 22 05:28:48 compute-0 ceph-mon[75840]: 4.14 scrub starts
Nov 22 05:28:48 compute-0 ceph-mon[75840]: 4.14 scrub ok
Nov 22 05:28:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 05:28:48 compute-0 python3.9[108305]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:28:48 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Nov 22 05:28:48 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Nov 22 05:28:48 compute-0 network[108322]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:28:48 compute-0 network[108323]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:28:48 compute-0 network[108324]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:28:48 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 22 05:28:48 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 22 05:28:49 compute-0 ceph-mon[75840]: 4.10 scrub starts
Nov 22 05:28:49 compute-0 ceph-mon[75840]: 4.10 scrub ok
Nov 22 05:28:49 compute-0 ceph-mon[75840]: pgmap v206: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 05:28:49 compute-0 ceph-mon[75840]: osdmap e94: 3 total, 3 up, 3 in
Nov 22 05:28:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Nov 22 05:28:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Nov 22 05:28:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 22 05:28:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 05:28:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 22 05:28:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 22 05:28:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 22 05:28:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 05:28:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 22 05:28:50 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 22 05:28:50 compute-0 ceph-mon[75840]: 6.1d scrub starts
Nov 22 05:28:50 compute-0 ceph-mon[75840]: 6.1d scrub ok
Nov 22 05:28:50 compute-0 ceph-mon[75840]: 4.1c scrub starts
Nov 22 05:28:50 compute-0 ceph-mon[75840]: 4.1c scrub ok
Nov 22 05:28:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 05:28:51 compute-0 ceph-mon[75840]: 6.1c scrub starts
Nov 22 05:28:51 compute-0 ceph-mon[75840]: 6.1c scrub ok
Nov 22 05:28:51 compute-0 ceph-mon[75840]: pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:51 compute-0 ceph-mon[75840]: 2.1c scrub starts
Nov 22 05:28:51 compute-0 ceph-mon[75840]: 2.1c scrub ok
Nov 22 05:28:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 05:28:51 compute-0 ceph-mon[75840]: osdmap e95: 3 total, 3 up, 3 in
Nov 22 05:28:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 22 05:28:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 05:28:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Nov 22 05:28:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Nov 22 05:28:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 22 05:28:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 05:28:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 05:28:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 22 05:28:52 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:28:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:28:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 22 05:28:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 22 05:28:53 compute-0 ceph-mon[75840]: pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:53 compute-0 ceph-mon[75840]: 6.1f scrub starts
Nov 22 05:28:53 compute-0 ceph-mon[75840]: 6.1f scrub ok
Nov 22 05:28:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 05:28:53 compute-0 ceph-mon[75840]: osdmap e96: 3 total, 3 up, 3 in
Nov 22 05:28:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 22 05:28:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 05:28:54 compute-0 python3.9[108584]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:28:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 22 05:28:54 compute-0 ceph-mon[75840]: 2.1d scrub starts
Nov 22 05:28:54 compute-0 ceph-mon[75840]: 2.1d scrub ok
Nov 22 05:28:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 05:28:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 05:28:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 22 05:28:54 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 22 05:28:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 22 05:28:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 22 05:28:55 compute-0 python3.9[108734]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:28:55 compute-0 ceph-mon[75840]: pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 05:28:55 compute-0 ceph-mon[75840]: osdmap e97: 3 total, 3 up, 3 in
Nov 22 05:28:55 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 22 05:28:55 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 22 05:28:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 22 05:28:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 05:28:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 22 05:28:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 22 05:28:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 22 05:28:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 05:28:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 22 05:28:56 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 22 05:28:56 compute-0 ceph-mon[75840]: 4.7 scrub starts
Nov 22 05:28:56 compute-0 ceph-mon[75840]: 4.7 scrub ok
Nov 22 05:28:56 compute-0 ceph-mon[75840]: pgmap v214: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 05:28:56 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 22 05:28:56 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 22 05:28:56 compute-0 python3.9[108888]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:28:56 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 22 05:28:56 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 22 05:28:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 97 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=12.134371758s) [2] r=-1 lpr=97 pi=[66,97)/1 crt=55'578 mlcod 0'0 active pruub 207.122039795s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:56 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 98 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97 pruub=12.134313583s) [2] r=-1 lpr=97 pi=[66,97)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 207.122039795s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:56 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 98 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=97) [2] r=0 lpr=98 pi=[66,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:28:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 22 05:28:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 22 05:28:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 22 05:28:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 99 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:57 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 99 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=-1 lpr=99 pi=[66,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 99 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:57 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 99 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 22 05:28:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 22 05:28:57 compute-0 ceph-mon[75840]: 4.12 scrub starts
Nov 22 05:28:57 compute-0 ceph-mon[75840]: 4.12 scrub ok
Nov 22 05:28:57 compute-0 ceph-mon[75840]: 2.1f scrub starts
Nov 22 05:28:57 compute-0 ceph-mon[75840]: 2.1f scrub ok
Nov 22 05:28:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 05:28:57 compute-0 ceph-mon[75840]: osdmap e98: 3 total, 3 up, 3 in
Nov 22 05:28:57 compute-0 ceph-mon[75840]: 2.1b scrub starts
Nov 22 05:28:57 compute-0 ceph-mon[75840]: osdmap e99: 3 total, 3 up, 3 in
Nov 22 05:28:57 compute-0 sudo[109044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvqmzqmufqikqftoxbixsvivejucebl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789337.152826-125-84895499051984/AnsiballZ_setup.py'
Nov 22 05:28:57 compute-0 sudo[109044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 22 05:28:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 05:28:57 compute-0 python3.9[109046]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:28:57 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 22 05:28:57 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 22 05:28:58 compute-0 sudo[109044]: pam_unix(sudo:session): session closed for user root
Nov 22 05:28:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 22 05:28:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 05:28:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 22 05:28:58 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 22 05:28:58 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100 pruub=11.041053772s) [1] r=-1 lpr=100 pi=[66,100)/1 crt=55'578 mlcod 0'0 active pruub 207.121765137s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:58 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100 pruub=11.040739059s) [1] r=-1 lpr=100 pi=[66,100)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 207.121765137s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:58 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=100) [1] r=0 lpr=100 pi=[66,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:58 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 100 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=99) [2]/[0] async=[2] r=0 lpr=99 pi=[66,99)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:28:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 22 05:28:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 22 05:28:58 compute-0 ceph-mon[75840]: 2.1b scrub ok
Nov 22 05:28:58 compute-0 ceph-mon[75840]: 4.1b scrub starts
Nov 22 05:28:58 compute-0 ceph-mon[75840]: 4.1b scrub ok
Nov 22 05:28:58 compute-0 ceph-mon[75840]: 3.17 scrub starts
Nov 22 05:28:58 compute-0 ceph-mon[75840]: 3.17 scrub ok
Nov 22 05:28:58 compute-0 ceph-mon[75840]: pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 05:28:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 05:28:58 compute-0 ceph-mon[75840]: osdmap e100: 3 total, 3 up, 3 in
Nov 22 05:28:58 compute-0 sudo[109128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stotbjegnxylswdfouxherpkyhioflyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789337.152826-125-84895499051984/AnsiballZ_dnf.py'
Nov 22 05:28:58 compute-0 sudo[109128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:28:58 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Nov 22 05:28:58 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Nov 22 05:28:58 compute-0 python3.9[109130]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:28:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 22 05:28:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 22 05:28:59 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 22 05:28:59 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:59 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=66/67 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:59 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101 pruub=15.007973671s) [2] async=[2] r=-1 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 55'578 active pruub 212.097778320s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:59 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=99/100 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101 pruub=15.007735252s) [2] r=-1 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 212.097778320s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:59 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[66,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:59 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[66,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:28:59 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:28:59 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 101 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:28:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 22 05:28:59 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 22 05:28:59 compute-0 ceph-mon[75840]: 4.11 scrub starts
Nov 22 05:28:59 compute-0 ceph-mon[75840]: 4.11 scrub ok
Nov 22 05:28:59 compute-0 ceph-mon[75840]: 7.13 scrub starts
Nov 22 05:28:59 compute-0 ceph-mon[75840]: 7.13 scrub ok
Nov 22 05:28:59 compute-0 ceph-mon[75840]: osdmap e101: 3 total, 3 up, 3 in
Nov 22 05:28:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:28:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 22 05:28:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 05:29:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 22 05:29:00 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 05:29:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 22 05:29:00 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 22 05:29:00 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=9.984745026s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'578 mlcod 0'0 active pruub 196.532760620s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:00 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102 pruub=9.984657288s) [0] r=-1 lpr=102 pi=[75,102)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 196.532760620s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:00 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=102) [0] r=0 lpr=102 pi=[75,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:00 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 102 pg[9.13( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=99/66 les/c/f=100/67/0 sis=101) [2] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:00 compute-0 ceph-mon[75840]: 5.1d deep-scrub starts
Nov 22 05:29:00 compute-0 ceph-mon[75840]: 5.1d deep-scrub ok
Nov 22 05:29:00 compute-0 ceph-mon[75840]: 3.15 scrub starts
Nov 22 05:29:00 compute-0 ceph-mon[75840]: 3.15 scrub ok
Nov 22 05:29:00 compute-0 ceph-mon[75840]: pgmap v220: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 05:29:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 05:29:00 compute-0 ceph-mon[75840]: osdmap e102: 3 total, 3 up, 3 in
Nov 22 05:29:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 22 05:29:00 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 102 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=66/66 les/c/f=67/67/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[66,101)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 22 05:29:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 22 05:29:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 22 05:29:01 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 22 05:29:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103 pruub=15.462775230s) [1] async=[1] r=-1 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 55'578 active pruub 214.564346313s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=101/102 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103 pruub=15.462686539s) [1] r=-1 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 214.564346313s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:01 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[75,103)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:01 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 103 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 103 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:01 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 103 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 22 05:29:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 22 05:29:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 22 05:29:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 22 05:29:02 compute-0 ceph-mon[75840]: 2.17 scrub starts
Nov 22 05:29:02 compute-0 ceph-mon[75840]: 2.17 scrub ok
Nov 22 05:29:02 compute-0 ceph-mon[75840]: osdmap e103: 3 total, 3 up, 3 in
Nov 22 05:29:02 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 22 05:29:02 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 104 pg[9.15( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=101/66 les/c/f=102/67/0 sis=103) [1] r=0 lpr=103 pi=[66,103)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:02 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 22 05:29:02 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 22 05:29:02 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 104 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[75,103)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 22 05:29:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 22 05:29:02 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 22 05:29:02 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 22 05:29:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 22 05:29:03 compute-0 ceph-mon[75840]: pgmap v223: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:03 compute-0 ceph-mon[75840]: 3.11 scrub starts
Nov 22 05:29:03 compute-0 ceph-mon[75840]: 3.11 scrub ok
Nov 22 05:29:03 compute-0 ceph-mon[75840]: osdmap e104: 3 total, 3 up, 3 in
Nov 22 05:29:03 compute-0 ceph-mon[75840]: 3.12 scrub starts
Nov 22 05:29:03 compute-0 ceph-mon[75840]: 3.12 scrub ok
Nov 22 05:29:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 22 05:29:03 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 22 05:29:03 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.042300224s) [0] async=[0] r=-1 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 55'578 active pruub 204.632476807s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:03 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=103/104 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105 pruub=15.042198181s) [0] r=-1 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 204.632476807s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:03 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:03 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 105 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:03 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 22 05:29:03 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 22 05:29:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:04 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 22 05:29:04 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 22 05:29:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 22 05:29:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 22 05:29:04 compute-0 ceph-mon[75840]: 5.11 scrub starts
Nov 22 05:29:04 compute-0 ceph-mon[75840]: 5.11 scrub ok
Nov 22 05:29:04 compute-0 ceph-mon[75840]: 7.8 scrub starts
Nov 22 05:29:04 compute-0 ceph-mon[75840]: 7.8 scrub ok
Nov 22 05:29:04 compute-0 ceph-mon[75840]: osdmap e105: 3 total, 3 up, 3 in
Nov 22 05:29:04 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 22 05:29:04 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 106 pg[9.16( v 55'578 (0'0,55'578] local-lis/les=105/106 n=6 ec=59/49 lis/c=103/75 les/c/f=104/76/0 sis=105) [0] r=0 lpr=105 pi=[75,105)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:05 compute-0 ceph-mon[75840]: 2.15 scrub starts
Nov 22 05:29:05 compute-0 ceph-mon[75840]: 2.15 scrub ok
Nov 22 05:29:05 compute-0 ceph-mon[75840]: pgmap v226: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:05 compute-0 ceph-mon[75840]: 3.f scrub starts
Nov 22 05:29:05 compute-0 ceph-mon[75840]: 3.f scrub ok
Nov 22 05:29:05 compute-0 ceph-mon[75840]: osdmap e106: 3 total, 3 up, 3 in
Nov 22 05:29:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 3 objects/s recovering
Nov 22 05:29:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 22 05:29:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 05:29:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 22 05:29:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 22 05:29:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 22 05:29:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 05:29:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 05:29:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 22 05:29:06 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 22 05:29:06 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 22 05:29:06 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 22 05:29:06 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 22 05:29:06 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 22 05:29:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:07 compute-0 ceph-mon[75840]: pgmap v228: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 71 B/s, 3 objects/s recovering
Nov 22 05:29:07 compute-0 ceph-mon[75840]: 7.1 scrub starts
Nov 22 05:29:07 compute-0 ceph-mon[75840]: 7.1 scrub ok
Nov 22 05:29:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 05:29:07 compute-0 ceph-mon[75840]: osdmap e107: 3 total, 3 up, 3 in
Nov 22 05:29:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.13 deep-scrub starts
Nov 22 05:29:07 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.13 deep-scrub ok
Nov 22 05:29:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 1 objects/s recovering
Nov 22 05:29:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 22 05:29:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 05:29:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 22 05:29:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 05:29:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 22 05:29:08 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 22 05:29:08 compute-0 ceph-mon[75840]: 5.12 scrub starts
Nov 22 05:29:08 compute-0 ceph-mon[75840]: 5.12 scrub ok
Nov 22 05:29:08 compute-0 ceph-mon[75840]: 7.2 scrub starts
Nov 22 05:29:08 compute-0 ceph-mon[75840]: 7.2 scrub ok
Nov 22 05:29:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 05:29:09 compute-0 ceph-mon[75840]: 5.13 deep-scrub starts
Nov 22 05:29:09 compute-0 ceph-mon[75840]: 5.13 deep-scrub ok
Nov 22 05:29:09 compute-0 ceph-mon[75840]: pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 1 objects/s recovering
Nov 22 05:29:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 05:29:09 compute-0 ceph-mon[75840]: osdmap e108: 3 total, 3 up, 3 in
Nov 22 05:29:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 22 05:29:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 22 05:29:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 05:29:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 22 05:29:10 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 05:29:10 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 05:29:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 22 05:29:10 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 22 05:29:10 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 109 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=15.884453773s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'578 mlcod 0'0 active pruub 224.134216309s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:10 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 109 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=15.884372711s) [2] r=-1 lpr=109 pi=[67,109)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 224.134216309s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:10 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=109) [2] r=0 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 22 05:29:11 compute-0 ceph-mon[75840]: pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 22 05:29:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 05:29:11 compute-0 ceph-mon[75840]: osdmap e109: 3 total, 3 up, 3 in
Nov 22 05:29:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 22 05:29:11 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 22 05:29:11 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 110 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:11 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 110 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=67/68 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:11 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:11 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 22 05:29:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 05:29:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 22 05:29:12 compute-0 ceph-mon[75840]: osdmap e110: 3 total, 3 up, 3 in
Nov 22 05:29:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 05:29:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 05:29:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 22 05:29:12 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 22 05:29:12 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 111 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=67/67 les/c/f=68/68/0 sis=110) [2]/[0] async=[2] r=0 lpr=110 pi=[67,110)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 22 05:29:13 compute-0 ceph-mon[75840]: pgmap v235: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 05:29:13 compute-0 ceph-mon[75840]: osdmap e111: 3 total, 3 up, 3 in
Nov 22 05:29:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 22 05:29:13 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 22 05:29:13 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:13 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:13 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.364536285s) [2] async=[2] r=-1 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 55'578 active pruub 226.624130249s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:13 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 112 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=110/111 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.364456177s) [2] r=-1 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 226.624130249s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 22 05:29:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 22 05:29:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 05:29:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 22 05:29:14 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 22 05:29:14 compute-0 ceph-mon[75840]: osdmap e112: 3 total, 3 up, 3 in
Nov 22 05:29:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 05:29:14 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 113 pg[9.19( v 55'578 (0'0,55'578] local-lis/les=112/113 n=6 ec=59/49 lis/c=110/67 les/c/f=111/68/0 sis=112) [2] r=0 lpr=112 pi=[67,112)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:15 compute-0 ceph-mon[75840]: pgmap v238: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 05:29:15 compute-0 ceph-mon[75840]: osdmap e113: 3 total, 3 up, 3 in
Nov 22 05:29:15 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 22 05:29:15 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 22 05:29:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 22 05:29:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 05:29:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 22 05:29:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 05:29:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 22 05:29:16 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 22 05:29:16 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 114 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=11.955540657s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=55'578 mlcod 0'0 active pruub 214.696533203s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:16 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 114 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114 pruub=11.954920769s) [0] r=-1 lpr=114 pi=[89,114)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 214.696533203s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:16 compute-0 ceph-mon[75840]: 3.7 scrub starts
Nov 22 05:29:16 compute-0 ceph-mon[75840]: 3.7 scrub ok
Nov 22 05:29:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 05:29:16 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 114 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=114) [0] r=0 lpr=114 pi=[89,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:16 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 22 05:29:16 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 22 05:29:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 22 05:29:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 22 05:29:17 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 22 05:29:17 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:17 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[89,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:17 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 115 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:17 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 115 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=89/90 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:17 compute-0 ceph-mon[75840]: pgmap v240: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 05:29:17 compute-0 ceph-mon[75840]: osdmap e114: 3 total, 3 up, 3 in
Nov 22 05:29:17 compute-0 ceph-mon[75840]: 7.9 deep-scrub starts
Nov 22 05:29:17 compute-0 ceph-mon[75840]: 7.9 deep-scrub ok
Nov 22 05:29:17 compute-0 ceph-mon[75840]: osdmap e115: 3 total, 3 up, 3 in
Nov 22 05:29:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 2 objects/s recovering
Nov 22 05:29:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 22 05:29:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 05:29:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 22 05:29:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 22 05:29:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 22 05:29:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 05:29:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 22 05:29:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 22 05:29:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 05:29:18 compute-0 ceph-mon[75840]: 3.c scrub starts
Nov 22 05:29:18 compute-0 ceph-mon[75840]: 3.c scrub ok
Nov 22 05:29:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 05:29:18 compute-0 ceph-mon[75840]: osdmap e116: 3 total, 3 up, 3 in
Nov 22 05:29:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 22 05:29:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 22 05:29:18 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 116 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=89/89 les/c/f=90/90/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[89,115)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 22 05:29:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 22 05:29:19 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 22 05:29:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.647471428s) [0] async=[0] r=-1 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 55'578 active pruub 221.433456421s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:19 compute-0 ceph-mon[75840]: pgmap v243: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 2 objects/s recovering
Nov 22 05:29:19 compute-0 ceph-mon[75840]: 7.5 scrub starts
Nov 22 05:29:19 compute-0 ceph-mon[75840]: 7.5 scrub ok
Nov 22 05:29:19 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=115/116 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117 pruub=15.646136284s) [0] r=-1 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 221.433456421s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:19 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:19 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 117 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:19 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 22 05:29:19 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 22 05:29:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 22 05:29:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 22 05:29:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 05:29:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 22 05:29:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 05:29:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 22 05:29:20 compute-0 ceph-mon[75840]: osdmap e117: 3 total, 3 up, 3 in
Nov 22 05:29:20 compute-0 ceph-mon[75840]: 3.8 scrub starts
Nov 22 05:29:20 compute-0 ceph-mon[75840]: 3.8 scrub ok
Nov 22 05:29:20 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 05:29:20 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 22 05:29:20 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 118 pg[9.1c( v 55'578 (0'0,55'578] local-lis/les=117/118 n=6 ec=59/49 lis/c=115/89 les/c/f=116/90/0 sis=117) [0] r=0 lpr=117 pi=[89,117)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:20 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 118 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=13.508271217s) [0] r=-1 lpr=118 pi=[75,118)/1 crt=55'578 mlcod 0'0 active pruub 220.533447266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:20 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 118 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118 pruub=13.508199692s) [0] r=-1 lpr=118 pi=[75,118)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 220.533447266s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:20 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 118 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=118) [0] r=0 lpr=118 pi=[75,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 22 05:29:21 compute-0 ceph-mon[75840]: pgmap v246: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 22 05:29:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 05:29:21 compute-0 ceph-mon[75840]: osdmap e118: 3 total, 3 up, 3 in
Nov 22 05:29:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 22 05:29:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 22 05:29:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:21 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[75,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 119 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:21 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 119 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=75/76 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 22 05:29:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 22 05:29:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 22 05:29:22 compute-0 ceph-mon[75840]: osdmap e119: 3 total, 3 up, 3 in
Nov 22 05:29:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 22 05:29:22 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 22 05:29:22 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 120 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=75/75 les/c/f=76/76/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[75,119)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:23 compute-0 ceph-mon[75840]: 5.9 scrub starts
Nov 22 05:29:23 compute-0 ceph-mon[75840]: 5.9 scrub ok
Nov 22 05:29:23 compute-0 ceph-mon[75840]: pgmap v249: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:23 compute-0 ceph-mon[75840]: osdmap e120: 3 total, 3 up, 3 in
Nov 22 05:29:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 22 05:29:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 22 05:29:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 22 05:29:23 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.105906487s) [0] async=[0] r=-1 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 55'578 active pruub 224.958618164s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:23 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=119/120 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121 pruub=15.105776787s) [0] r=-1 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 224.958618164s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:23 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 121 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 22 05:29:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 22 05:29:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 22 05:29:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 22 05:29:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 22 05:29:24 compute-0 ceph-mon[75840]: osdmap e121: 3 total, 3 up, 3 in
Nov 22 05:29:24 compute-0 ceph-mon[75840]: 5.16 scrub starts
Nov 22 05:29:24 compute-0 ceph-mon[75840]: 5.16 scrub ok
Nov 22 05:29:24 compute-0 ceph-mon[75840]: pgmap v252: 321 pgs: 1 remapped+peering, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 05:29:24 compute-0 ceph-osd[89779]: osd.0 pg_epoch: 122 pg[9.1e( v 55'578 (0'0,55'578] local-lis/les=121/122 n=6 ec=59/49 lis/c=119/75 les/c/f=120/76/0 sis=121) [0] r=0 lpr=121 pi=[75,121)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:25 compute-0 ceph-mon[75840]: osdmap e122: 3 total, 3 up, 3 in
Nov 22 05:29:25 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 22 05:29:25 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 22 05:29:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 05:29:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 05:29:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:29:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 22 05:29:26 compute-0 ceph-mon[75840]: pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 05:29:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 05:29:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:29:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 22 05:29:26 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 22 05:29:26 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 123 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123 pruub=10.947411537s) [1] r=-1 lpr=123 pi=[78,123)/1 crt=55'578 mlcod 0'0 active pruub 223.843734741s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:26 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 123 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123 pruub=10.947351456s) [1] r=-1 lpr=123 pi=[78,123)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 223.843734741s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:26 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=123) [1] r=0 lpr=123 pi=[78,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 22 05:29:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 22 05:29:27 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 22 05:29:27 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 124 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:27 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 124 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 22 05:29:27 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[78,124)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:27 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[78,124)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 22 05:29:27 compute-0 ceph-mon[75840]: 2.d scrub starts
Nov 22 05:29:27 compute-0 ceph-mon[75840]: 2.d scrub ok
Nov 22 05:29:27 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 05:29:27 compute-0 ceph-mon[75840]: osdmap e123: 3 total, 3 up, 3 in
Nov 22 05:29:27 compute-0 ceph-mon[75840]: osdmap e124: 3 total, 3 up, 3 in
Nov 22 05:29:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 05:29:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 22 05:29:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 22 05:29:28 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 22 05:29:28 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 125 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=78/78 les/c/f=79/79/0 sis=124) [1]/[2] async=[1] r=0 lpr=124 pi=[78,124)/1 crt=55'578 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 22 05:29:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 22 05:29:28 compute-0 ceph-mon[75840]: 7.6 scrub starts
Nov 22 05:29:28 compute-0 ceph-mon[75840]: 7.6 scrub ok
Nov 22 05:29:28 compute-0 ceph-mon[75840]: pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Nov 22 05:29:28 compute-0 ceph-mon[75840]: osdmap e125: 3 total, 3 up, 3 in
Nov 22 05:29:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 22 05:29:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 22 05:29:29 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 22 05:29:29 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126 pruub=15.003382683s) [1] async=[1] r=-1 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 55'578 active pruub 230.555160522s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:29 compute-0 ceph-osd[91881]: osd.2 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=124/125 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126 pruub=15.003293991s) [1] r=-1 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 unknown NOTIFY pruub 230.555160522s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 05:29:29 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 luod=0'0 crt=55'578 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 05:29:29 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 126 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=0/0 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 05:29:29 compute-0 ceph-mon[75840]: 3.3 scrub starts
Nov 22 05:29:29 compute-0 ceph-mon[75840]: 3.3 scrub ok
Nov 22 05:29:29 compute-0 ceph-mon[75840]: osdmap e126: 3 total, 3 up, 3 in
Nov 22 05:29:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 22 05:29:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 22 05:29:30 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 22 05:29:30 compute-0 ceph-osd[90784]: osd.1 pg_epoch: 127 pg[9.1f( v 55'578 (0'0,55'578] local-lis/les=126/127 n=6 ec=59/49 lis/c=124/78 les/c/f=125/79/0 sis=126) [1] r=0 lpr=126 pi=[78,126)/1 crt=55'578 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 05:29:30 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 22 05:29:30 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 22 05:29:31 compute-0 ceph-mon[75840]: pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:31 compute-0 ceph-mon[75840]: osdmap e127: 3 total, 3 up, 3 in
Nov 22 05:29:31 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Nov 22 05:29:31 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Nov 22 05:29:31 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 22 05:29:31 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 22 05:29:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 22 05:29:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 7.c scrub starts
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 7.c scrub ok
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 2.5 deep-scrub starts
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 2.5 deep-scrub ok
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 7.e scrub starts
Nov 22 05:29:32 compute-0 ceph-mon[75840]: 7.e scrub ok
Nov 22 05:29:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 22 05:29:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 22 05:29:33 compute-0 ceph-mon[75840]: pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 22 05:29:33 compute-0 ceph-mon[75840]: 2.7 scrub starts
Nov 22 05:29:33 compute-0 ceph-mon[75840]: 2.7 scrub ok
Nov 22 05:29:33 compute-0 sudo[109275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:33 compute-0 sudo[109275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:33 compute-0 sudo[109275]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:33 compute-0 sudo[109300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:29:33 compute-0 sudo[109300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:33 compute-0 sudo[109300]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:33 compute-0 sudo[109325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:33 compute-0 sudo[109325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:33 compute-0 sudo[109325]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:33 compute-0 sudo[109350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:29:33 compute-0 sudo[109350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 22 05:29:34 compute-0 sudo[109350]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0a6a9f28-9990-4d76-b588-b335d3e966d9 does not exist
Nov 22 05:29:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ec831de7-02dc-4eb6-896c-364d0af5e3f1 does not exist
Nov 22 05:29:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f6b83453-3c9e-4a0a-9995-ec4b66ed4069 does not exist
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:29:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:29:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:29:34 compute-0 sudo[109406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:34 compute-0 sudo[109406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:34 compute-0 sudo[109406]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:34 compute-0 sudo[109431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:29:34 compute-0 sudo[109431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:34 compute-0 sudo[109431]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:34 compute-0 sudo[109456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:34 compute-0 sudo[109456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:34 compute-0 sudo[109456]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:34 compute-0 sudo[109481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:29:34 compute-0 sudo[109481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:34 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 22 05:29:34 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 22 05:29:34 compute-0 podman[109544]: 2025-11-22 05:29:34.974676047 +0000 UTC m=+0.074268627 container create 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:29:35 compute-0 systemd[1]: Started libpod-conmon-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope.
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:34.94608034 +0000 UTC m=+0.045672910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:35.075919557 +0000 UTC m=+0.175512187 container init 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:35.0847654 +0000 UTC m=+0.184357990 container start 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:29:35 compute-0 magical_rhodes[109561]: 167 167
Nov 22 05:29:35 compute-0 systemd[1]: libpod-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope: Deactivated successfully.
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:35.089959228 +0000 UTC m=+0.189551818 container attach 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:35.090869712 +0000 UTC m=+0.190462292 container died 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:29:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-24a888ee51b65521e24e48a1f493dbed213c7ab6a85ae92c6e2946fe9aaac14d-merged.mount: Deactivated successfully.
Nov 22 05:29:35 compute-0 ceph-mon[75840]: pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:29:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:29:35 compute-0 ceph-mon[75840]: 3.1d scrub starts
Nov 22 05:29:35 compute-0 ceph-mon[75840]: 3.1d scrub ok
Nov 22 05:29:35 compute-0 podman[109544]: 2025-11-22 05:29:35.15728551 +0000 UTC m=+0.256878070 container remove 7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_rhodes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:29:35 compute-0 systemd[1]: libpod-conmon-7904e72204867092a22923123de5ff9ffeb9bba1cc6d5c75078badafad9eec6d.scope: Deactivated successfully.
Nov 22 05:29:35 compute-0 podman[109587]: 2025-11-22 05:29:35.344514144 +0000 UTC m=+0.026852991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:35 compute-0 podman[109587]: 2025-11-22 05:29:35.441723447 +0000 UTC m=+0.124062294 container create 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:29:35 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 22 05:29:35 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 22 05:29:35 compute-0 systemd[1]: Started libpod-conmon-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope.
Nov 22 05:29:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:35 compute-0 podman[109587]: 2025-11-22 05:29:35.638754872 +0000 UTC m=+0.321093829 container init 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:29:35 compute-0 podman[109587]: 2025-11-22 05:29:35.649728432 +0000 UTC m=+0.332067299 container start 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:29:35 compute-0 podman[109587]: 2025-11-22 05:29:35.654720724 +0000 UTC m=+0.337059671 container attach 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:29:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 22 05:29:35 compute-0 sudo[109128]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:36 compute-0 sshd-session[109601]: Invalid user shred from 80.94.92.166 port 46088
Nov 22 05:29:36 compute-0 ceph-mon[75840]: 7.1a scrub starts
Nov 22 05:29:36 compute-0 ceph-mon[75840]: 7.1a scrub ok
Nov 22 05:29:36 compute-0 sshd-session[109601]: Connection closed by invalid user shred 80.94.92.166 port 46088 [preauth]
Nov 22 05:29:36 compute-0 sudo[109770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqpbrlayoywelramyxljlhglbsdxhar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789376.1793041-137-230449677254184/AnsiballZ_command.py'
Nov 22 05:29:36 compute-0 sudo[109770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:36 compute-0 python3.9[109773]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:29:36 compute-0 stupefied_chatelet[109605]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:29:36 compute-0 stupefied_chatelet[109605]: --> relative data size: 1.0
Nov 22 05:29:36 compute-0 stupefied_chatelet[109605]: --> All data devices are unavailable
Nov 22 05:29:36 compute-0 systemd[1]: libpod-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Deactivated successfully.
Nov 22 05:29:36 compute-0 systemd[1]: libpod-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Consumed 1.056s CPU time.
Nov 22 05:29:36 compute-0 podman[109587]: 2025-11-22 05:29:36.781803542 +0000 UTC m=+1.464142439 container died 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:29:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a004550fa6207d3e90ddc090d3497b923cd7ac9874baa0eeae3351d7871c890-merged.mount: Deactivated successfully.
Nov 22 05:29:36 compute-0 podman[109587]: 2025-11-22 05:29:36.887326195 +0000 UTC m=+1.569665022 container remove 004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:29:36 compute-0 systemd[1]: libpod-conmon-004b22e6896a8156b06ee369db7eeaa538b93ea770f196336dd44c1a5f1bce6a.scope: Deactivated successfully.
Nov 22 05:29:36 compute-0 sudo[109481]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:37 compute-0 sudo[109807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:37 compute-0 sudo[109807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:37 compute-0 sudo[109807]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:37 compute-0 sudo[109832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:29:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:37 compute-0 sudo[109832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:37 compute-0 sudo[109832]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:37 compute-0 ceph-mon[75840]: pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 22 05:29:37 compute-0 sudo[109857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:37 compute-0 sudo[109857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:37 compute-0 sudo[109857]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:37 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 22 05:29:37 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 22 05:29:37 compute-0 sudo[109933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:29:37 compute-0 sudo[109933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:37 compute-0 sudo[109770]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 22 05:29:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.69575439 +0000 UTC m=+0.047613381 container create 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:29:37 compute-0 systemd[1]: Started libpod-conmon-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope.
Nov 22 05:29:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 22 05:29:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.676185082 +0000 UTC m=+0.028044073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.79175248 +0000 UTC m=+0.143611531 container init 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.803532683 +0000 UTC m=+0.155391684 container start 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.807464566 +0000 UTC m=+0.159323567 container attach 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:29:37 compute-0 friendly_spence[110140]: 167 167
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.809652814 +0000 UTC m=+0.161511775 container died 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:29:37 compute-0 systemd[1]: libpod-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope: Deactivated successfully.
Nov 22 05:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38034dfe3b8c006b790edeb155770b81b1093b83420de3ddc0be47e32c593ee-merged.mount: Deactivated successfully.
Nov 22 05:29:37 compute-0 podman[110101]: 2025-11-22 05:29:37.851466481 +0000 UTC m=+0.203325462 container remove 17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_spence, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:29:37 compute-0 systemd[1]: libpod-conmon-17462abb2ae009f360812ec5fb826cc5eca7fbd36c79722b11b168a5253bbf82.scope: Deactivated successfully.
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.00487101 +0000 UTC m=+0.050072165 container create 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:29:38 compute-0 systemd[1]: Started libpod-conmon-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope.
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:37.983032613 +0000 UTC m=+0.028233808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.101027555 +0000 UTC m=+0.146228760 container init 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.116390002 +0000 UTC m=+0.161591157 container start 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.120019579 +0000 UTC m=+0.165220814 container attach 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:29:38 compute-0 ceph-mon[75840]: 7.4 scrub starts
Nov 22 05:29:38 compute-0 ceph-mon[75840]: 7.4 scrub ok
Nov 22 05:29:38 compute-0 ceph-mon[75840]: 3.1e scrub starts
Nov 22 05:29:38 compute-0 ceph-mon[75840]: 3.1e scrub ok
Nov 22 05:29:38 compute-0 sudo[110287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvebhugajxjxxmcfsjwpmzvtfnqcwkeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789377.7512512-145-99337037798275/AnsiballZ_selinux.py'
Nov 22 05:29:38 compute-0 sudo[110287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:38 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 22 05:29:38 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 22 05:29:38 compute-0 python3.9[110289]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 05:29:38 compute-0 sudo[110287]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]: {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     "0": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "devices": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "/dev/loop3"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             ],
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_name": "ceph_lv0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_size": "21470642176",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "name": "ceph_lv0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "tags": {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_name": "ceph",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.crush_device_class": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.encrypted": "0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_id": "0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.vdo": "0"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             },
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "vg_name": "ceph_vg0"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         }
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     ],
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     "1": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "devices": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "/dev/loop4"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             ],
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_name": "ceph_lv1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_size": "21470642176",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "name": "ceph_lv1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "tags": {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_name": "ceph",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.crush_device_class": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.encrypted": "0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_id": "1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.vdo": "0"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             },
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "vg_name": "ceph_vg1"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         }
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     ],
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     "2": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "devices": [
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "/dev/loop5"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             ],
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_name": "ceph_lv2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_size": "21470642176",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "name": "ceph_lv2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "tags": {
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.cluster_name": "ceph",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.crush_device_class": "",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.encrypted": "0",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osd_id": "2",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:                 "ceph.vdo": "0"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             },
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "type": "block",
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:             "vg_name": "ceph_vg2"
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:         }
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]:     ]
Nov 22 05:29:38 compute-0 youthful_dijkstra[110209]: }
Nov 22 05:29:38 compute-0 systemd[1]: libpod-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope: Deactivated successfully.
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.906977095 +0000 UTC m=+0.952178270 container died 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-09a7cee42742036df9ac1c764dcc89f75c296f6d315cabcdfafa6b6a0036be68-merged.mount: Deactivated successfully.
Nov 22 05:29:38 compute-0 podman[110193]: 2025-11-22 05:29:38.966077239 +0000 UTC m=+1.011278374 container remove 6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:29:38 compute-0 systemd[1]: libpod-conmon-6d7e19cdc34a4c10c310f5bb9fdec67d7e324821745fabb4b780e442e82d8a71.scope: Deactivated successfully.
Nov 22 05:29:38 compute-0 sudo[109933]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:39 compute-0 sudo[110330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:39 compute-0 sudo[110330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:39 compute-0 sudo[110330]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:39 compute-0 sudo[110355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:29:39 compute-0 sudo[110355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:39 compute-0 sudo[110355]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:39 compute-0 sudo[110403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:39 compute-0 sudo[110403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:39 compute-0 sudo[110403]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:39 compute-0 ceph-mon[75840]: pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 22 05:29:39 compute-0 ceph-mon[75840]: 3.e scrub starts
Nov 22 05:29:39 compute-0 ceph-mon[75840]: 3.e scrub ok
Nov 22 05:29:39 compute-0 sudo[110457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:29:39 compute-0 sudo[110457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:39 compute-0 sudo[110568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjvzaghdzwttoiweilxibolnokssxci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789379.120813-156-60002985283099/AnsiballZ_command.py'
Nov 22 05:29:39 compute-0 sudo[110568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:39 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 22 05:29:39 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 22 05:29:39 compute-0 python3.9[110577]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 05:29:39 compute-0 sudo[110568]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.688916559 +0000 UTC m=+0.066700836 container create 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:29:39 compute-0 systemd[1]: Started libpod-conmon-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope.
Nov 22 05:29:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.660376164 +0000 UTC m=+0.038160531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.78796804 +0000 UTC m=+0.165752397 container init 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.795340345 +0000 UTC m=+0.173124662 container start 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.799713641 +0000 UTC m=+0.177498008 container attach 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:29:39 compute-0 dreamy_curie[110613]: 167 167
Nov 22 05:29:39 compute-0 systemd[1]: libpod-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope: Deactivated successfully.
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.801823067 +0000 UTC m=+0.179607374 container died 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f31e4cb8efbabb6ebc1ccc487eb0cc13cea06e1e777ef9292dd787303589a9b-merged.mount: Deactivated successfully.
Nov 22 05:29:39 compute-0 podman[110597]: 2025-11-22 05:29:39.853962547 +0000 UTC m=+0.231746854 container remove 3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:29:39 compute-0 systemd[1]: libpod-conmon-3c9dcb903f135a028265b992762ef170cd998401ded06a39252d869725411386.scope: Deactivated successfully.
Nov 22 05:29:40 compute-0 podman[110691]: 2025-11-22 05:29:40.055756647 +0000 UTC m=+0.041336854 container create 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:29:40 compute-0 systemd[1]: Started libpod-conmon-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope.
Nov 22 05:29:40 compute-0 podman[110691]: 2025-11-22 05:29:40.040970736 +0000 UTC m=+0.026550963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:29:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:29:40 compute-0 podman[110691]: 2025-11-22 05:29:40.168895352 +0000 UTC m=+0.154475659 container init 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:29:40 compute-0 podman[110691]: 2025-11-22 05:29:40.183498838 +0000 UTC m=+0.169079085 container start 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:29:40 compute-0 podman[110691]: 2025-11-22 05:29:40.189805745 +0000 UTC m=+0.175385972 container attach 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:29:40 compute-0 ceph-mon[75840]: 3.5 scrub starts
Nov 22 05:29:40 compute-0 ceph-mon[75840]: 3.5 scrub ok
Nov 22 05:29:40 compute-0 sudo[110810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syyjgsgdobqetxmkdnumkcjvopyqbuzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789379.9818683-164-248961078956364/AnsiballZ_file.py'
Nov 22 05:29:40 compute-0 sudo[110810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:40 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.6 deep-scrub starts
Nov 22 05:29:40 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.6 deep-scrub ok
Nov 22 05:29:40 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 22 05:29:40 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 22 05:29:40 compute-0 python3.9[110812]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:29:40 compute-0 sudo[110810]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:41 compute-0 ceph-mon[75840]: pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 22 05:29:41 compute-0 ceph-mon[75840]: 3.6 deep-scrub starts
Nov 22 05:29:41 compute-0 ceph-mon[75840]: 3.6 deep-scrub ok
Nov 22 05:29:41 compute-0 ceph-mon[75840]: 2.4 scrub starts
Nov 22 05:29:41 compute-0 ceph-mon[75840]: 2.4 scrub ok
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]: {
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:29:41 compute-0 sudo[110988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlkmolujszggwxnqignfwhkddfwawvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789380.7219458-172-11721009384045/AnsiballZ_mount.py'
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_id": 1,
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "type": "bluestore"
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     },
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_id": 2,
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "type": "bluestore"
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     },
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_id": 0,
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:         "type": "bluestore"
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]:     }
Nov 22 05:29:41 compute-0 elegant_blackwell[110755]: }
Nov 22 05:29:41 compute-0 sudo[110988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:41 compute-0 systemd[1]: libpod-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Deactivated successfully.
Nov 22 05:29:41 compute-0 systemd[1]: libpod-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Consumed 1.108s CPU time.
Nov 22 05:29:41 compute-0 podman[110691]: 2025-11-22 05:29:41.302068091 +0000 UTC m=+1.287648308 container died 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 05:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-639ca2d394e00ba2d6227323e4a7d0d808cc69f364d1c2ae495234d36e0d0b10-merged.mount: Deactivated successfully.
Nov 22 05:29:41 compute-0 podman[110691]: 2025-11-22 05:29:41.382213983 +0000 UTC m=+1.367794230 container remove 0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:29:41 compute-0 systemd[1]: libpod-conmon-0136d8952b7aca2871e34c810a5f077f9d96feac2b5fe1eb7bca57e2720aa52c.scope: Deactivated successfully.
Nov 22 05:29:41 compute-0 sudo[110457]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:29:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:29:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d20db80c-3c03-434c-b581-93671437425d does not exist
Nov 22 05:29:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev b6fa1616-e14d-4d75-b719-97f879348504 does not exist
Nov 22 05:29:41 compute-0 sudo[111007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:29:41 compute-0 sudo[111007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:41 compute-0 sudo[111007]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:41 compute-0 python3.9[110992]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 05:29:41 compute-0 sudo[110988]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:41 compute-0 sudo[111032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:29:41 compute-0 sudo[111032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:29:41 compute-0 sudo[111032]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 22 05:29:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 22 05:29:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 22 05:29:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:29:42 compute-0 ceph-mon[75840]: pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 22 05:29:42 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.3 deep-scrub starts
Nov 22 05:29:42 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.3 deep-scrub ok
Nov 22 05:29:42 compute-0 sudo[111206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrfgjfcvfsqluvmdgpcwbcygexovrus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789382.2209957-200-67988140844610/AnsiballZ_file.py'
Nov 22 05:29:42 compute-0 sudo[111206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:42 compute-0 python3.9[111208]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:29:42 compute-0 sudo[111206]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:43 compute-0 sshd-session[111285]: Connection closed by 80.94.92.182 port 54904
Nov 22 05:29:43 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 22 05:29:43 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 22 05:29:43 compute-0 sudo[111359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npbghvhwtrfyunovcjbivaeketmnivtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789383.1084805-208-261112612711555/AnsiballZ_stat.py'
Nov 22 05:29:43 compute-0 sudo[111359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:43 compute-0 ceph-mon[75840]: 7.3 scrub starts
Nov 22 05:29:43 compute-0 ceph-mon[75840]: 7.3 scrub ok
Nov 22 05:29:43 compute-0 ceph-mon[75840]: 10.3 deep-scrub starts
Nov 22 05:29:43 compute-0 ceph-mon[75840]: 10.3 deep-scrub ok
Nov 22 05:29:43 compute-0 python3.9[111361]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:29:43
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:29:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:29:44 compute-0 ceph-mon[75840]: 2.6 scrub starts
Nov 22 05:29:44 compute-0 ceph-mon[75840]: 2.6 scrub ok
Nov 22 05:29:44 compute-0 ceph-mon[75840]: pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:44 compute-0 sudo[111359]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:45 compute-0 sudo[111437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpqeqpmryngyyewbxzqhkflmgjesmws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789383.1084805-208-261112612711555/AnsiballZ_file.py'
Nov 22 05:29:45 compute-0 sudo[111437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:45 compute-0 python3.9[111439]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:29:45 compute-0 sudo[111437]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 22 05:29:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 22 05:29:45 compute-0 ceph-mon[75840]: 5.f scrub starts
Nov 22 05:29:45 compute-0 ceph-mon[75840]: 5.f scrub ok
Nov 22 05:29:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:46 compute-0 sudo[111589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iunlaysvpoyofqykfyupxjtnxasmfmuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789385.808534-229-138975187201017/AnsiballZ_stat.py'
Nov 22 05:29:46 compute-0 sudo[111589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Nov 22 05:29:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Nov 22 05:29:46 compute-0 python3.9[111591]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:29:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 05:29:46 compute-0 sudo[111589]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 05:29:46 compute-0 ceph-mon[75840]: pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:47 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 22 05:29:47 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 22 05:29:47 compute-0 sudo[111743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njsqpiygggarbyrifpnhyiedhboxqnbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789387.0023615-242-19626312208673/AnsiballZ_getent.py'
Nov 22 05:29:47 compute-0 sudo[111743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:47 compute-0 ceph-mon[75840]: 3.9 deep-scrub starts
Nov 22 05:29:47 compute-0 ceph-mon[75840]: 3.9 deep-scrub ok
Nov 22 05:29:47 compute-0 ceph-mon[75840]: 10.5 scrub starts
Nov 22 05:29:47 compute-0 ceph-mon[75840]: 10.5 scrub ok
Nov 22 05:29:47 compute-0 python3.9[111745]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 05:29:47 compute-0 sudo[111743]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:48 compute-0 sudo[111896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaeuijlnxrutnlpewkmsyvtdnqzpseaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789387.9920058-252-71260964306025/AnsiballZ_getent.py'
Nov 22 05:29:48 compute-0 sudo[111896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:48 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 22 05:29:48 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 22 05:29:48 compute-0 python3.9[111898]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 05:29:48 compute-0 sudo[111896]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:48 compute-0 ceph-mon[75840]: 3.a scrub starts
Nov 22 05:29:48 compute-0 ceph-mon[75840]: 3.a scrub ok
Nov 22 05:29:48 compute-0 ceph-mon[75840]: pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:49 compute-0 sudo[112049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtfzcfukzsgnthowejztvutfswsoipxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789388.767016-260-132099823031529/AnsiballZ_group.py'
Nov 22 05:29:49 compute-0 sudo[112049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:49 compute-0 python3.9[112051]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:29:49 compute-0 sudo[112049]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:49 compute-0 ceph-mon[75840]: 7.1f scrub starts
Nov 22 05:29:49 compute-0 ceph-mon[75840]: 7.1f scrub ok
Nov 22 05:29:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:50 compute-0 sudo[112201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egujxfgniaasacisnbtmiwrgmcizmofq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789389.8771715-269-201837548969703/AnsiballZ_file.py'
Nov 22 05:29:50 compute-0 sudo[112201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:50 compute-0 python3.9[112203]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 05:29:50 compute-0 sudo[112201]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:50 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Nov 22 05:29:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 22 05:29:50 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Nov 22 05:29:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 22 05:29:50 compute-0 ceph-mon[75840]: pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:50 compute-0 ceph-mon[75840]: 10.a deep-scrub starts
Nov 22 05:29:50 compute-0 ceph-mon[75840]: 2.3 scrub starts
Nov 22 05:29:50 compute-0 ceph-mon[75840]: 10.a deep-scrub ok
Nov 22 05:29:50 compute-0 ceph-mon[75840]: 2.3 scrub ok
Nov 22 05:29:51 compute-0 sudo[112353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngidztwenmxudlqndyztzfzkodeuoozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789390.8066218-280-153942316113521/AnsiballZ_dnf.py'
Nov 22 05:29:51 compute-0 sudo[112353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 05:29:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 05:29:51 compute-0 python3.9[112355]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:29:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 22 05:29:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 22 05:29:51 compute-0 ceph-mon[75840]: 10.c scrub starts
Nov 22 05:29:51 compute-0 ceph-mon[75840]: 10.c scrub ok
Nov 22 05:29:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:52 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 05:29:52 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:29:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:29:52 compute-0 ceph-mon[75840]: 2.9 scrub starts
Nov 22 05:29:52 compute-0 ceph-mon[75840]: 2.9 scrub ok
Nov 22 05:29:52 compute-0 ceph-mon[75840]: pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:52 compute-0 ceph-mon[75840]: 10.18 scrub starts
Nov 22 05:29:52 compute-0 ceph-mon[75840]: 10.18 scrub ok
Nov 22 05:29:52 compute-0 sudo[112353]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:53 compute-0 sudo[112506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqorggbsfhykdgscfeqtswrkxvgjpjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789392.995491-288-225141094997921/AnsiballZ_file.py'
Nov 22 05:29:53 compute-0 sudo[112506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:53 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 22 05:29:53 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 22 05:29:53 compute-0 python3.9[112508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:29:53 compute-0 sudo[112506]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:54 compute-0 sudo[112658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ievkjcbfwfgojkdkxbxcifavdafeibdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789393.8965936-296-121721140179498/AnsiballZ_stat.py'
Nov 22 05:29:54 compute-0 sudo[112658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:54 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 22 05:29:54 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 22 05:29:54 compute-0 python3.9[112660]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:29:54 compute-0 sudo[112658]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:54 compute-0 ceph-mon[75840]: 2.a scrub starts
Nov 22 05:29:54 compute-0 ceph-mon[75840]: 2.a scrub ok
Nov 22 05:29:54 compute-0 ceph-mon[75840]: pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:54 compute-0 sudo[112736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tganincvntxtzkrnotxrpdtsjmilyjjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789393.8965936-296-121721140179498/AnsiballZ_file.py'
Nov 22 05:29:54 compute-0 sudo[112736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:55 compute-0 python3.9[112738]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:29:55 compute-0 sudo[112736]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:55 compute-0 sudo[112888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqthspopiljqtrtdpekzgiamjposdpxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789395.312847-309-206621748434778/AnsiballZ_stat.py'
Nov 22 05:29:55 compute-0 sudo[112888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:55 compute-0 ceph-mon[75840]: 7.18 scrub starts
Nov 22 05:29:55 compute-0 ceph-mon[75840]: 7.18 scrub ok
Nov 22 05:29:55 compute-0 python3.9[112890]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:29:55 compute-0 sudo[112888]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:56 compute-0 sudo[112966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjsecsfuhvaunguvbstsrlsbzbjyhxyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789395.312847-309-206621748434778/AnsiballZ_file.py'
Nov 22 05:29:56 compute-0 sudo[112966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:56 compute-0 python3.9[112968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:29:56 compute-0 sudo[112966]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:56 compute-0 ceph-mon[75840]: pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:29:57 compute-0 systemd[77455]: Created slice User Background Tasks Slice.
Nov 22 05:29:57 compute-0 sudo[113118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snbrubttmcxmncfntkivijzqufqfioxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789396.7543864-324-163000850350290/AnsiballZ_dnf.py'
Nov 22 05:29:57 compute-0 systemd[77455]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 05:29:57 compute-0 sudo[113118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:29:57 compute-0 systemd[77455]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 05:29:57 compute-0 python3.9[113121]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:29:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 05:29:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 05:29:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 22 05:29:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 22 05:29:58 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 05:29:58 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 05:29:58 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 22 05:29:58 compute-0 sudo[113118]: pam_unix(sudo:session): session closed for user root
Nov 22 05:29:58 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 22 05:29:58 compute-0 ceph-mon[75840]: 5.1a scrub starts
Nov 22 05:29:58 compute-0 ceph-mon[75840]: 5.1a scrub ok
Nov 22 05:29:58 compute-0 ceph-mon[75840]: pgmap v275: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:58 compute-0 ceph-mon[75840]: 10.1b scrub starts
Nov 22 05:29:58 compute-0 ceph-mon[75840]: 10.1b scrub ok
Nov 22 05:29:59 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 05:29:59 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 05:29:59 compute-0 python3.9[113272]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:29:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 3.1b deep-scrub starts
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 3.1b deep-scrub ok
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 5.19 scrub starts
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 5.19 scrub ok
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 10.1c scrub starts
Nov 22 05:29:59 compute-0 ceph-mon[75840]: 10.1c scrub ok
Nov 22 05:30:00 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 22 05:30:00 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 22 05:30:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 22 05:30:00 compute-0 python3.9[113424]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 05:30:00 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 22 05:30:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 05:30:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 05:30:00 compute-0 ceph-mon[75840]: pgmap v276: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:00 compute-0 ceph-mon[75840]: 10.1d scrub starts
Nov 22 05:30:00 compute-0 ceph-mon[75840]: 10.1d scrub ok
Nov 22 05:30:01 compute-0 python3.9[113574]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:30:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:01 compute-0 ceph-mon[75840]: 7.1b scrub starts
Nov 22 05:30:01 compute-0 ceph-mon[75840]: 7.1b scrub ok
Nov 22 05:30:01 compute-0 ceph-mon[75840]: 5.c scrub starts
Nov 22 05:30:01 compute-0 ceph-mon[75840]: 5.c scrub ok
Nov 22 05:30:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:02 compute-0 sudo[113724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nponyahywwvnkryrukbiagvrdxogtwzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789401.4933934-365-178346957694391/AnsiballZ_systemd.py'
Nov 22 05:30:02 compute-0 sudo[113724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:02 compute-0 python3.9[113726]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:30:02 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 05:30:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 05:30:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 05:30:02 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 05:30:02 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 05:30:02 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 05:30:02 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 05:30:02 compute-0 ceph-mon[75840]: pgmap v277: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:02 compute-0 sudo[113724]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:03 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 22 05:30:03 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 22 05:30:03 compute-0 python3.9[113888]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 05:30:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:03 compute-0 ceph-mon[75840]: 5.18 scrub starts
Nov 22 05:30:03 compute-0 ceph-mon[75840]: 5.18 scrub ok
Nov 22 05:30:04 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 22 05:30:04 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 22 05:30:04 compute-0 ceph-mon[75840]: 7.f scrub starts
Nov 22 05:30:04 compute-0 ceph-mon[75840]: 7.f scrub ok
Nov 22 05:30:04 compute-0 ceph-mon[75840]: pgmap v278: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:05 compute-0 sudo[114038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujrrfepfigqknnalylmwirvjxtudgse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789405.0962045-422-223265639919927/AnsiballZ_systemd.py'
Nov 22 05:30:05 compute-0 sudo[114038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:05 compute-0 python3.9[114040]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:30:05 compute-0 sudo[114038]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:05 compute-0 ceph-mon[75840]: 2.b scrub starts
Nov 22 05:30:05 compute-0 ceph-mon[75840]: 2.b scrub ok
Nov 22 05:30:06 compute-0 sudo[114192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkifzrxntulygllanppxmsfmlbhjhszw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789405.9297967-422-20216438532766/AnsiballZ_systemd.py'
Nov 22 05:30:06 compute-0 sudo[114192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:06 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 05:30:06 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 05:30:06 compute-0 python3.9[114194]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:30:06 compute-0 sudo[114192]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:06 compute-0 ceph-mon[75840]: pgmap v279: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:06 compute-0 ceph-mon[75840]: 10.1f deep-scrub starts
Nov 22 05:30:06 compute-0 ceph-mon[75840]: 10.1f deep-scrub ok
Nov 22 05:30:07 compute-0 sshd-session[107216]: Connection closed by 192.168.122.30 port 44616
Nov 22 05:30:07 compute-0 sshd-session[107213]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:30:07 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Nov 22 05:30:07 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 22 05:30:07 compute-0 systemd[1]: session-34.scope: Consumed 1min 5.986s CPU time.
Nov 22 05:30:07 compute-0 systemd-logind[798]: Removed session 34.
Nov 22 05:30:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:08 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 05:30:08 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 05:30:08 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 22 05:30:08 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 22 05:30:08 compute-0 ceph-mon[75840]: pgmap v280: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:08 compute-0 ceph-mon[75840]: 11.15 scrub starts
Nov 22 05:30:08 compute-0 ceph-mon[75840]: 11.15 scrub ok
Nov 22 05:30:09 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 22 05:30:09 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 22 05:30:09 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 05:30:09 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 05:30:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:09 compute-0 ceph-mon[75840]: 3.1f scrub starts
Nov 22 05:30:09 compute-0 ceph-mon[75840]: 3.1f scrub ok
Nov 22 05:30:09 compute-0 ceph-mon[75840]: 11.2 scrub starts
Nov 22 05:30:09 compute-0 ceph-mon[75840]: 11.2 scrub ok
Nov 22 05:30:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 22 05:30:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 22 05:30:10 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 22 05:30:10 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 22 05:30:10 compute-0 ceph-mon[75840]: 8.1 scrub starts
Nov 22 05:30:10 compute-0 ceph-mon[75840]: 8.1 scrub ok
Nov 22 05:30:10 compute-0 ceph-mon[75840]: pgmap v281: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:11 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 22 05:30:11 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 22 05:30:11 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 22 05:30:11 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 22 05:30:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:11 compute-0 sshd-session[114221]: Accepted publickey for zuul from 192.168.122.30 port 41580 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:30:11 compute-0 systemd-logind[798]: New session 35 of user zuul.
Nov 22 05:30:11 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 22 05:30:11 compute-0 sshd-session[114221]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 10.9 scrub starts
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 10.9 scrub ok
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 8.3 scrub starts
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 8.3 scrub ok
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 8.d scrub starts
Nov 22 05:30:11 compute-0 ceph-mon[75840]: 8.d scrub ok
Nov 22 05:30:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 05:30:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 05:30:12 compute-0 ceph-mon[75840]: 8.5 deep-scrub starts
Nov 22 05:30:12 compute-0 ceph-mon[75840]: 8.5 deep-scrub ok
Nov 22 05:30:12 compute-0 ceph-mon[75840]: pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:12 compute-0 ceph-mon[75840]: 8.15 scrub starts
Nov 22 05:30:12 compute-0 ceph-mon[75840]: 8.15 scrub ok
Nov 22 05:30:13 compute-0 python3.9[114374]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:13 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 05:30:13 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 05:30:13 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 22 05:30:13 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:13 compute-0 ceph-mon[75840]: 11.d scrub starts
Nov 22 05:30:13 compute-0 ceph-mon[75840]: 11.d scrub ok
Nov 22 05:30:14 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 05:30:14 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 05:30:14 compute-0 sudo[114528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfeahpqxqqdyapqpboxxojtqsfixmpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789413.8201473-36-181076840215364/AnsiballZ_getent.py'
Nov 22 05:30:14 compute-0 sudo[114528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:14 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 22 05:30:14 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 22 05:30:14 compute-0 python3.9[114530]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 05:30:14 compute-0 sudo[114528]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:14 compute-0 ceph-mon[75840]: 10.8 scrub starts
Nov 22 05:30:14 compute-0 ceph-mon[75840]: 10.8 scrub ok
Nov 22 05:30:14 compute-0 ceph-mon[75840]: pgmap v283: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:14 compute-0 ceph-mon[75840]: 8.2 scrub starts
Nov 22 05:30:14 compute-0 ceph-mon[75840]: 8.2 scrub ok
Nov 22 05:30:15 compute-0 sudo[114681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fholcwryjsxxichqkbcbsqghjzqcppbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789414.935959-48-40045634910872/AnsiballZ_setup.py'
Nov 22 05:30:15 compute-0 sudo[114681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:15 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 22 05:30:15 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 22 05:30:15 compute-0 python3.9[114683]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:30:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:15 compute-0 sudo[114681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:15 compute-0 ceph-mon[75840]: 10.15 scrub starts
Nov 22 05:30:15 compute-0 ceph-mon[75840]: 10.15 scrub ok
Nov 22 05:30:15 compute-0 ceph-mon[75840]: 11.3 scrub starts
Nov 22 05:30:15 compute-0 ceph-mon[75840]: 11.3 scrub ok
Nov 22 05:30:16 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 05:30:16 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 05:30:16 compute-0 sudo[114765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlqmzrglmyafiopnielhzdzuaggckwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789414.935959-48-40045634910872/AnsiballZ_dnf.py'
Nov 22 05:30:16 compute-0 sudo[114765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:16 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 22 05:30:16 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 22 05:30:16 compute-0 python3.9[114767]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 05:30:16 compute-0 ceph-mon[75840]: pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:16 compute-0 ceph-mon[75840]: 11.b scrub starts
Nov 22 05:30:16 compute-0 ceph-mon[75840]: 11.b scrub ok
Nov 22 05:30:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 05:30:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 05:30:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 22 05:30:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 22 05:30:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:18 compute-0 ceph-mon[75840]: 10.7 deep-scrub starts
Nov 22 05:30:18 compute-0 ceph-mon[75840]: 10.7 deep-scrub ok
Nov 22 05:30:18 compute-0 ceph-mon[75840]: 11.9 scrub starts
Nov 22 05:30:18 compute-0 ceph-mon[75840]: 11.9 scrub ok
Nov 22 05:30:18 compute-0 sudo[114765]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 05:30:18 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 05:30:18 compute-0 sudo[114918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqohgxiuedhatfdkmduvlqvnsjyhtvgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789418.3185875-62-51657699399895/AnsiballZ_dnf.py'
Nov 22 05:30:18 compute-0 sudo[114918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:18 compute-0 python3.9[114920]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:30:19 compute-0 ceph-mon[75840]: 10.4 deep-scrub starts
Nov 22 05:30:19 compute-0 ceph-mon[75840]: 10.4 deep-scrub ok
Nov 22 05:30:19 compute-0 ceph-mon[75840]: pgmap v285: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:19 compute-0 ceph-mon[75840]: 11.8 scrub starts
Nov 22 05:30:19 compute-0 ceph-mon[75840]: 11.8 scrub ok
Nov 22 05:30:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:19 compute-0 sudo[114918]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:20 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 22 05:30:20 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 22 05:30:20 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 22 05:30:20 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 22 05:30:20 compute-0 sudo[115071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumkcficfzlykcayxeyzbcsyjdajybyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789420.1593451-70-80664616939684/AnsiballZ_systemd.py'
Nov 22 05:30:20 compute-0 sudo[115071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:21 compute-0 ceph-mon[75840]: pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:21 compute-0 ceph-mon[75840]: 8.1b scrub starts
Nov 22 05:30:21 compute-0 ceph-mon[75840]: 8.1b scrub ok
Nov 22 05:30:21 compute-0 python3.9[115073]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:30:21 compute-0 sudo[115071]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 05:30:21 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 05:30:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:21 compute-0 python3.9[115226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:22 compute-0 ceph-mon[75840]: 10.17 scrub starts
Nov 22 05:30:22 compute-0 ceph-mon[75840]: 10.17 scrub ok
Nov 22 05:30:22 compute-0 ceph-mon[75840]: 8.7 scrub starts
Nov 22 05:30:22 compute-0 ceph-mon[75840]: 8.7 scrub ok
Nov 22 05:30:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:22 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 22 05:30:22 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 22 05:30:22 compute-0 sudo[115376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efbizqbxmipdgcqvxgrkctwnlvrutmos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789422.2220113-88-54110308469533/AnsiballZ_sefcontext.py'
Nov 22 05:30:22 compute-0 sudo[115376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:23 compute-0 python3.9[115378]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 05:30:23 compute-0 ceph-mon[75840]: pgmap v287: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:23 compute-0 sudo[115376]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 22 05:30:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 22 05:30:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:24 compute-0 ceph-mon[75840]: 10.16 scrub starts
Nov 22 05:30:24 compute-0 ceph-mon[75840]: 10.16 scrub ok
Nov 22 05:30:24 compute-0 ceph-mon[75840]: 8.8 scrub starts
Nov 22 05:30:24 compute-0 ceph-mon[75840]: 8.8 scrub ok
Nov 22 05:30:24 compute-0 python3.9[115528]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:25 compute-0 sudo[115684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilfidtflrhhywnfogewxjickumwnpkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789424.656439-106-125681768458232/AnsiballZ_dnf.py'
Nov 22 05:30:25 compute-0 sudo[115684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:25 compute-0 ceph-mon[75840]: pgmap v288: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:25 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 05:30:25 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 05:30:25 compute-0 python3.9[115686]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:30:25 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 22 05:30:25 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 22 05:30:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:26 compute-0 ceph-mon[75840]: 11.18 scrub starts
Nov 22 05:30:26 compute-0 ceph-mon[75840]: 11.18 scrub ok
Nov 22 05:30:26 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 22 05:30:26 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 22 05:30:26 compute-0 sudo[115684]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:27 compute-0 ceph-mon[75840]: 10.1 scrub starts
Nov 22 05:30:27 compute-0 ceph-mon[75840]: 10.1 scrub ok
Nov 22 05:30:27 compute-0 ceph-mon[75840]: pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:27 compute-0 sudo[115837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfeomcjbjmrkwrcklstvfazcwkcvrngv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789426.7750382-114-270411950264956/AnsiballZ_command.py'
Nov 22 05:30:27 compute-0 sudo[115837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Nov 22 05:30:27 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Nov 22 05:30:27 compute-0 python3.9[115839]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:30:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:28 compute-0 ceph-mon[75840]: 10.d scrub starts
Nov 22 05:30:28 compute-0 ceph-mon[75840]: 10.d scrub ok
Nov 22 05:30:28 compute-0 sudo[115837]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 22 05:30:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 22 05:30:29 compute-0 sudo[116124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sefywlgbyzwimuylmvntyvlirhhkmttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789428.4931335-122-134921538121407/AnsiballZ_file.py'
Nov 22 05:30:29 compute-0 sudo[116124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:29 compute-0 ceph-mon[75840]: 10.1e deep-scrub starts
Nov 22 05:30:29 compute-0 ceph-mon[75840]: 10.1e deep-scrub ok
Nov 22 05:30:29 compute-0 ceph-mon[75840]: pgmap v290: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:29 compute-0 ceph-mon[75840]: 10.e scrub starts
Nov 22 05:30:29 compute-0 ceph-mon[75840]: 10.e scrub ok
Nov 22 05:30:29 compute-0 python3.9[116126]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 05:30:29 compute-0 sudo[116124]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:30 compute-0 python3.9[116276]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:30:30 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 05:30:30 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 05:30:30 compute-0 sudo[116428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlislwukaxysdzqwdwufpburwwsydbrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789430.5196111-138-263806171191912/AnsiballZ_dnf.py'
Nov 22 05:30:30 compute-0 sudo[116428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:31 compute-0 ceph-mon[75840]: pgmap v291: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:31 compute-0 ceph-mon[75840]: 11.1b scrub starts
Nov 22 05:30:31 compute-0 ceph-mon[75840]: 11.1b scrub ok
Nov 22 05:30:31 compute-0 python3.9[116430]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:30:31 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Nov 22 05:30:31 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Nov 22 05:30:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 22 05:30:31 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 22 05:30:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:32 compute-0 ceph-mon[75840]: 11.1e deep-scrub starts
Nov 22 05:30:32 compute-0 ceph-mon[75840]: 11.1e deep-scrub ok
Nov 22 05:30:32 compute-0 ceph-mon[75840]: 11.14 scrub starts
Nov 22 05:30:32 compute-0 ceph-mon[75840]: 11.14 scrub ok
Nov 22 05:30:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 05:30:32 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 05:30:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 22 05:30:32 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 22 05:30:32 compute-0 sudo[116428]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:33 compute-0 sudo[116581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwisbnvnsthkhvullfwxednxfrourof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789432.735481-147-75134750179648/AnsiballZ_dnf.py'
Nov 22 05:30:33 compute-0 sudo[116581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:33 compute-0 ceph-mon[75840]: pgmap v292: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:33 compute-0 ceph-mon[75840]: 8.a scrub starts
Nov 22 05:30:33 compute-0 ceph-mon[75840]: 8.a scrub ok
Nov 22 05:30:33 compute-0 ceph-mon[75840]: 11.17 scrub starts
Nov 22 05:30:33 compute-0 ceph-mon[75840]: 11.17 scrub ok
Nov 22 05:30:33 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 05:30:33 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 05:30:33 compute-0 python3.9[116583]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:30:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:34 compute-0 ceph-mon[75840]: 11.11 scrub starts
Nov 22 05:30:34 compute-0 ceph-mon[75840]: 11.11 scrub ok
Nov 22 05:30:34 compute-0 sudo[116581]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:35 compute-0 ceph-mon[75840]: pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:35 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 05:30:35 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 05:30:35 compute-0 sudo[116734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmhglsvrxoocoitxcknfjkcinifretgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789435.022004-159-161166995336389/AnsiballZ_stat.py'
Nov 22 05:30:35 compute-0 sudo[116734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:35 compute-0 python3.9[116736]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:30:35 compute-0 sudo[116734]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:36 compute-0 ceph-mon[75840]: 8.13 scrub starts
Nov 22 05:30:36 compute-0 ceph-mon[75840]: 8.13 scrub ok
Nov 22 05:30:36 compute-0 sudo[116888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkykwgqptkmpzcprfjxcuiltsjgxyay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789435.8601055-167-149435773999740/AnsiballZ_slurp.py'
Nov 22 05:30:36 compute-0 sudo[116888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:36 compute-0 python3.9[116890]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 22 05:30:36 compute-0 sudo[116888]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:37 compute-0 ceph-mon[75840]: pgmap v294: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 22 05:30:37 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 22 05:30:37 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 22 05:30:37 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 22 05:30:37 compute-0 sshd-session[114224]: Connection closed by 192.168.122.30 port 41580
Nov 22 05:30:37 compute-0 sshd-session[114221]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:30:37 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 22 05:30:37 compute-0 systemd[1]: session-35.scope: Consumed 19.762s CPU time.
Nov 22 05:30:37 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Nov 22 05:30:37 compute-0 systemd-logind[798]: Removed session 35.
Nov 22 05:30:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:38 compute-0 ceph-mon[75840]: 11.1f scrub starts
Nov 22 05:30:38 compute-0 ceph-mon[75840]: 11.1f scrub ok
Nov 22 05:30:38 compute-0 ceph-mon[75840]: 8.14 scrub starts
Nov 22 05:30:38 compute-0 ceph-mon[75840]: 8.14 scrub ok
Nov 22 05:30:39 compute-0 ceph-mon[75840]: pgmap v295: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 05:30:39 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 05:30:39 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 22 05:30:39 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 22 05:30:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:40 compute-0 ceph-mon[75840]: 8.16 scrub starts
Nov 22 05:30:40 compute-0 ceph-mon[75840]: 8.16 scrub ok
Nov 22 05:30:40 compute-0 ceph-mon[75840]: 8.c scrub starts
Nov 22 05:30:40 compute-0 ceph-mon[75840]: 8.c scrub ok
Nov 22 05:30:40 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 22 05:30:40 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 22 05:30:41 compute-0 ceph-mon[75840]: pgmap v296: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:41 compute-0 ceph-mon[75840]: 11.1 deep-scrub starts
Nov 22 05:30:41 compute-0 ceph-mon[75840]: 11.1 deep-scrub ok
Nov 22 05:30:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 05:30:41 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 05:30:41 compute-0 sudo[116915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:41 compute-0 sudo[116915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:41 compute-0 sudo[116915]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:41 compute-0 sudo[116940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:30:41 compute-0 sudo[116940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:41 compute-0 sudo[116940]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:41 compute-0 sudo[116965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:41 compute-0 sudo[116965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:41 compute-0 sudo[116965]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:41 compute-0 sudo[116990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:30:41 compute-0 sudo[116990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:42 compute-0 ceph-mon[75840]: 8.17 scrub starts
Nov 22 05:30:42 compute-0 ceph-mon[75840]: 8.17 scrub ok
Nov 22 05:30:42 compute-0 sudo[116990]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:42 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3f1756ae-39d6-435f-b817-d438424d80c8 does not exist
Nov 22 05:30:42 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c83876d6-0b24-4d22-ab67-f6787c740985 does not exist
Nov 22 05:30:42 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 36ab5fbc-fb9f-4e99-9a7a-28807d8cbd8f does not exist
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:30:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:30:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:30:42 compute-0 sudo[117045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:42 compute-0 sudo[117045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:42 compute-0 sudo[117045]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:42 compute-0 sudo[117070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:30:42 compute-0 sudo[117070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:42 compute-0 sudo[117070]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:42 compute-0 sudo[117095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:42 compute-0 sudo[117095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:42 compute-0 sudo[117095]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:42 compute-0 sudo[117120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:30:42 compute-0 sudo[117120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 05:30:43 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 05:30:43 compute-0 ceph-mon[75840]: pgmap v297: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:30:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:30:43 compute-0 sshd-session[117171]: Accepted publickey for zuul from 192.168.122.30 port 43246 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:30:43 compute-0 systemd-logind[798]: New session 36 of user zuul.
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.301462367 +0000 UTC m=+0.050231752 container create 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:30:43 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 22 05:30:43 compute-0 sshd-session[117171]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:30:43 compute-0 systemd[1]: Started libpod-conmon-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope.
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.275869039 +0000 UTC m=+0.024638454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.397575736 +0000 UTC m=+0.146345201 container init 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.410555759 +0000 UTC m=+0.159325134 container start 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.414698269 +0000 UTC m=+0.163467724 container attach 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:30:43 compute-0 bold_dewdney[117206]: 167 167
Nov 22 05:30:43 compute-0 systemd[1]: libpod-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope: Deactivated successfully.
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.421379257 +0000 UTC m=+0.170148702 container died 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:30:43 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 22 05:30:43 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 22 05:30:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e1c30bd2d42116d305870d8f433f99d3b719c36f240894bb4ada205abc9eb8-merged.mount: Deactivated successfully.
Nov 22 05:30:43 compute-0 podman[117187]: 2025-11-22 05:30:43.478757568 +0000 UTC m=+0.227526983 container remove 942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dewdney, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:30:43 compute-0 systemd[1]: libpod-conmon-942d1ea52d5e22cd15d7d9a1aba429d843d067fed8cbbf0d9cb5bb01946452ab.scope: Deactivated successfully.
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:30:43
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr']
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:30:43 compute-0 podman[117281]: 2025-11-22 05:30:43.695912454 +0000 UTC m=+0.056691083 container create 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:30:43 compute-0 podman[117281]: 2025-11-22 05:30:43.667929272 +0000 UTC m=+0.028708011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:43 compute-0 systemd[1]: Started libpod-conmon-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope.
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:30:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:30:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:43 compute-0 podman[117281]: 2025-11-22 05:30:43.879341617 +0000 UTC m=+0.240120266 container init 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:30:43 compute-0 podman[117281]: 2025-11-22 05:30:43.889162507 +0000 UTC m=+0.249941166 container start 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:30:43 compute-0 podman[117281]: 2025-11-22 05:30:43.898444033 +0000 UTC m=+0.259222702 container attach 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:30:44 compute-0 ceph-mon[75840]: 8.4 scrub starts
Nov 22 05:30:44 compute-0 ceph-mon[75840]: 8.4 scrub ok
Nov 22 05:30:44 compute-0 ceph-mon[75840]: 11.f scrub starts
Nov 22 05:30:44 compute-0 ceph-mon[75840]: 11.f scrub ok
Nov 22 05:30:44 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 22 05:30:44 compute-0 python3.9[117401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:44 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 22 05:30:45 compute-0 flamboyant_keller[117297]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:30:45 compute-0 flamboyant_keller[117297]: --> relative data size: 1.0
Nov 22 05:30:45 compute-0 flamboyant_keller[117297]: --> All data devices are unavailable
Nov 22 05:30:45 compute-0 podman[117281]: 2025-11-22 05:30:45.090711489 +0000 UTC m=+1.451490188 container died 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:30:45 compute-0 systemd[1]: libpod-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Deactivated successfully.
Nov 22 05:30:45 compute-0 systemd[1]: libpod-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Consumed 1.143s CPU time.
Nov 22 05:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d6d7118d8583f236da76a81d33cf6fc15989d3693cb1fd0821aa472b2f4b762-merged.mount: Deactivated successfully.
Nov 22 05:30:45 compute-0 podman[117281]: 2025-11-22 05:30:45.173876494 +0000 UTC m=+1.534655133 container remove 861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_keller, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:30:45 compute-0 systemd[1]: libpod-conmon-861bd2b12a170cd505ab660566d22d9ab96d4309297ae4eada968da760c0988d.scope: Deactivated successfully.
Nov 22 05:30:45 compute-0 sudo[117120]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:45 compute-0 ceph-mon[75840]: pgmap v298: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:45 compute-0 ceph-mon[75840]: 8.e scrub starts
Nov 22 05:30:45 compute-0 ceph-mon[75840]: 8.e scrub ok
Nov 22 05:30:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 05:30:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 05:30:45 compute-0 sudo[117541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:45 compute-0 sudo[117541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:45 compute-0 sudo[117541]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:45 compute-0 sudo[117596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:30:45 compute-0 sudo[117596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:45 compute-0 sudo[117596]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:45 compute-0 sudo[117641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:45 compute-0 sudo[117641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:45 compute-0 sudo[117641]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:45 compute-0 sudo[117666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:30:45 compute-0 sudo[117666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:45 compute-0 python3.9[117636]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:30:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:45 compute-0 podman[117759]: 2025-11-22 05:30:45.964135513 +0000 UTC m=+0.052194535 container create 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:30:46 compute-0 systemd[1]: Started libpod-conmon-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope.
Nov 22 05:30:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:45.949052773 +0000 UTC m=+0.037111795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:46.065827779 +0000 UTC m=+0.153886851 container init 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:46.076276346 +0000 UTC m=+0.164335408 container start 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:30:46 compute-0 nervous_ritchie[117789]: 167 167
Nov 22 05:30:46 compute-0 systemd[1]: libpod-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope: Deactivated successfully.
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:46.081911815 +0000 UTC m=+0.169970947 container attach 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:46.082696716 +0000 UTC m=+0.170755778 container died 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ca365b306d057f3201adcc6d8afe8df75853c706e8e508e05b730de33683eb8-merged.mount: Deactivated successfully.
Nov 22 05:30:46 compute-0 podman[117759]: 2025-11-22 05:30:46.125352426 +0000 UTC m=+0.213411448 container remove 7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:30:46 compute-0 systemd[1]: libpod-conmon-7fb7a1267dda0e75422231a5754bb00e52f7ed8846b8ee9f59291095d98987cc.scope: Deactivated successfully.
Nov 22 05:30:46 compute-0 ceph-mon[75840]: 8.19 scrub starts
Nov 22 05:30:46 compute-0 ceph-mon[75840]: 8.19 scrub ok
Nov 22 05:30:46 compute-0 podman[117860]: 2025-11-22 05:30:46.326626152 +0000 UTC m=+0.047046058 container create f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:30:46 compute-0 systemd[1]: Started libpod-conmon-f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea.scope.
Nov 22 05:30:46 compute-0 podman[117860]: 2025-11-22 05:30:46.30578774 +0000 UTC m=+0.026207626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:46 compute-0 podman[117860]: 2025-11-22 05:30:46.425956075 +0000 UTC m=+0.146376031 container init f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:30:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 22 05:30:46 compute-0 podman[117860]: 2025-11-22 05:30:46.439303889 +0000 UTC m=+0.159723755 container start f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:30:46 compute-0 podman[117860]: 2025-11-22 05:30:46.444563048 +0000 UTC m=+0.164982954 container attach f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:30:46 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 22 05:30:46 compute-0 python3.9[117983]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:30:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:47 compute-0 boring_perlman[117905]: {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     "0": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "devices": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "/dev/loop3"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             ],
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_name": "ceph_lv0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_size": "21470642176",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "name": "ceph_lv0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "tags": {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_name": "ceph",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.crush_device_class": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.encrypted": "0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_id": "0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.vdo": "0"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             },
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "vg_name": "ceph_vg0"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         }
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     ],
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     "1": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "devices": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "/dev/loop4"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             ],
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_name": "ceph_lv1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_size": "21470642176",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "name": "ceph_lv1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "tags": {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_name": "ceph",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.crush_device_class": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.encrypted": "0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_id": "1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.vdo": "0"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             },
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "vg_name": "ceph_vg1"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         }
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     ],
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     "2": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "devices": [
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "/dev/loop5"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             ],
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_name": "ceph_lv2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_size": "21470642176",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "name": "ceph_lv2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "tags": {
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.cluster_name": "ceph",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.crush_device_class": "",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.encrypted": "0",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osd_id": "2",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:                 "ceph.vdo": "0"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             },
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "type": "block",
Nov 22 05:30:47 compute-0 boring_perlman[117905]:             "vg_name": "ceph_vg2"
Nov 22 05:30:47 compute-0 boring_perlman[117905]:         }
Nov 22 05:30:47 compute-0 boring_perlman[117905]:     ]
Nov 22 05:30:47 compute-0 boring_perlman[117905]: }
Nov 22 05:30:47 compute-0 systemd[1]: libpod-f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea.scope: Deactivated successfully.
Nov 22 05:30:47 compute-0 podman[117860]: 2025-11-22 05:30:47.192162217 +0000 UTC m=+0.912582113 container died f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:30:47 compute-0 rsyslogd[1005]: imjournal from <np0005531754:boring_perlman>: begin to drop messages due to rate-limiting
Nov 22 05:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-370e287c9905213680a5b2a822144c3e15630d30b5a1d3a51e1b31d6bcb694d7-merged.mount: Deactivated successfully.
Nov 22 05:30:47 compute-0 ceph-mon[75840]: pgmap v299: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:47 compute-0 ceph-mon[75840]: 11.e scrub starts
Nov 22 05:30:47 compute-0 ceph-mon[75840]: 11.e scrub ok
Nov 22 05:30:47 compute-0 podman[117860]: 2025-11-22 05:30:47.277313854 +0000 UTC m=+0.997733760 container remove f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:30:47 compute-0 systemd[1]: libpod-conmon-f2920052c8fe5f40f662be2df7296099b92a086a67910aed441cd58034af8eea.scope: Deactivated successfully.
Nov 22 05:30:47 compute-0 sudo[117666]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:47 compute-0 sudo[118024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:47 compute-0 sudo[118024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:47 compute-0 sudo[118024]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:47 compute-0 sshd-session[117202]: Connection closed by 192.168.122.30 port 43246
Nov 22 05:30:47 compute-0 sshd-session[117171]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:30:47 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 22 05:30:47 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 22 05:30:47 compute-0 systemd[1]: session-36.scope: Consumed 2.784s CPU time.
Nov 22 05:30:47 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Nov 22 05:30:47 compute-0 systemd-logind[798]: Removed session 36.
Nov 22 05:30:47 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 22 05:30:47 compute-0 sudo[118049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:30:47 compute-0 sudo[118049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:47 compute-0 sudo[118049]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:47 compute-0 sudo[118075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:47 compute-0 sudo[118075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:47 compute-0 sudo[118075]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:47 compute-0 sudo[118100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:30:47 compute-0 sudo[118100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.166649519 +0000 UTC m=+0.066462942 container create 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:30:48 compute-0 systemd[1]: Started libpod-conmon-6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719.scope.
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.140193608 +0000 UTC m=+0.040007121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:48 compute-0 ceph-mon[75840]: 8.10 scrub starts
Nov 22 05:30:48 compute-0 ceph-mon[75840]: 8.10 scrub ok
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.274874788 +0000 UTC m=+0.174688301 container init 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.285733956 +0000 UTC m=+0.185547399 container start 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.290149633 +0000 UTC m=+0.189963086 container attach 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:30:48 compute-0 relaxed_zhukovsky[118181]: 167 167
Nov 22 05:30:48 compute-0 systemd[1]: libpod-6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719.scope: Deactivated successfully.
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.293659447 +0000 UTC m=+0.193472890 container died 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c22d95d3680f6d00d7df5554008caf84704d017193af1badee89ffcdaae7c60f-merged.mount: Deactivated successfully.
Nov 22 05:30:48 compute-0 podman[118165]: 2025-11-22 05:30:48.348429999 +0000 UTC m=+0.248243452 container remove 6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_zhukovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:30:48 compute-0 systemd[1]: libpod-conmon-6f2d59ba98af19f066eb1a9bdd7f90731bcc92d6c38d09236a5fa5321f911719.scope: Deactivated successfully.
Nov 22 05:30:48 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 22 05:30:48 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 22 05:30:48 compute-0 podman[118205]: 2025-11-22 05:30:48.587594809 +0000 UTC m=+0.055330488 container create f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:30:48 compute-0 systemd[1]: Started libpod-conmon-f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7.scope.
Nov 22 05:30:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b44f785c0f6af5527f7384a6e68eafcf26988914a35b2ce0d1fac06cd39521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b44f785c0f6af5527f7384a6e68eafcf26988914a35b2ce0d1fac06cd39521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b44f785c0f6af5527f7384a6e68eafcf26988914a35b2ce0d1fac06cd39521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b44f785c0f6af5527f7384a6e68eafcf26988914a35b2ce0d1fac06cd39521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:30:48 compute-0 podman[118205]: 2025-11-22 05:30:48.573012892 +0000 UTC m=+0.040748591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:30:48 compute-0 podman[118205]: 2025-11-22 05:30:48.671071441 +0000 UTC m=+0.138807140 container init f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:30:48 compute-0 podman[118205]: 2025-11-22 05:30:48.676220258 +0000 UTC m=+0.143955937 container start f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:30:48 compute-0 podman[118205]: 2025-11-22 05:30:48.679811274 +0000 UTC m=+0.147546983 container attach f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:30:49 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 22 05:30:49 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 22 05:30:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 22 05:30:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 22 05:30:49 compute-0 ceph-mon[75840]: pgmap v300: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:49 compute-0 ceph-mon[75840]: 8.f scrub starts
Nov 22 05:30:49 compute-0 ceph-mon[75840]: 8.f scrub ok
Nov 22 05:30:49 compute-0 sharp_burnell[118221]: {
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_id": 1,
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "type": "bluestore"
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     },
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_id": 2,
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "type": "bluestore"
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     },
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_id": 0,
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:         "type": "bluestore"
Nov 22 05:30:49 compute-0 sharp_burnell[118221]:     }
Nov 22 05:30:49 compute-0 sharp_burnell[118221]: }
Nov 22 05:30:49 compute-0 systemd[1]: libpod-f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7.scope: Deactivated successfully.
Nov 22 05:30:49 compute-0 podman[118205]: 2025-11-22 05:30:49.760897742 +0000 UTC m=+1.228633461 container died f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:30:49 compute-0 systemd[1]: libpod-f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7.scope: Consumed 1.084s CPU time.
Nov 22 05:30:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b44f785c0f6af5527f7384a6e68eafcf26988914a35b2ce0d1fac06cd39521-merged.mount: Deactivated successfully.
Nov 22 05:30:49 compute-0 podman[118205]: 2025-11-22 05:30:49.831346389 +0000 UTC m=+1.299082108 container remove f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 05:30:49 compute-0 systemd[1]: libpod-conmon-f29133e06eb8118f9f6c12a6c3438b7dbebe1647ad36669d435601731f7fd1e7.scope: Deactivated successfully.
Nov 22 05:30:49 compute-0 sudo[118100]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:30:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.902731) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449902885, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7193, "num_deletes": 251, "total_data_size": 8790492, "memory_usage": 8951056, "flush_reason": "Manual Compaction"}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 22 05:30:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:49 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 60dba36a-4db3-43bb-8771-419323be5411 does not exist
Nov 22 05:30:49 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5cb40868-a5ea-45ce-bd56-c6587888fe3f does not exist
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449948879, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7088840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7331, "table_properties": {"data_size": 7062446, "index_size": 17255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74760, "raw_average_key_size": 23, "raw_value_size": 7000327, "raw_average_value_size": 2172, "num_data_blocks": 758, "num_entries": 3222, "num_filter_entries": 3222, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789037, "oldest_key_time": 1763789037, "file_creation_time": 1763789449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 46219 microseconds, and 26744 cpu microseconds.
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.948954) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7088840 bytes OK
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.948989) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.950851) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.950867) EVENT_LOG_v1 {"time_micros": 1763789449950862, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.950897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8759306, prev total WAL file size 8798729, number of live WAL files 2.
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.953385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(6922KB) 13(53KB) 8(1944B)]
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449953518, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7146037, "oldest_snapshot_seqno": -1}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3038 keys, 7101420 bytes, temperature: kUnknown
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449992349, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7101420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7075421, "index_size": 17313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72832, "raw_average_key_size": 23, "raw_value_size": 7014873, "raw_average_value_size": 2309, "num_data_blocks": 762, "num_entries": 3038, "num_filter_entries": 3038, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.992634) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7101420 bytes
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.994231) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.7 rd, 182.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(6.8, 0.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3328, records dropped: 290 output_compression: NoCompression
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.994254) EVENT_LOG_v1 {"time_micros": 1763789449994242, "job": 4, "event": "compaction_finished", "compaction_time_micros": 38908, "compaction_time_cpu_micros": 17814, "output_level": 6, "num_output_files": 1, "total_output_size": 7101420, "num_input_records": 3328, "num_output_records": 3038, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449995680, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449995752, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789449995789, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 22 05:30:49 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:30:49.953251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:30:50 compute-0 sudo[118269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:30:50 compute-0 sudo[118269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:50 compute-0 sudo[118269]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:50 compute-0 sudo[118294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:30:50 compute-0 sudo[118294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:30:50 compute-0 sudo[118294]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 22 05:30:50 compute-0 ceph-mon[75840]: 11.1a scrub starts
Nov 22 05:30:50 compute-0 ceph-mon[75840]: 11.1a scrub ok
Nov 22 05:30:50 compute-0 ceph-mon[75840]: 8.1e scrub starts
Nov 22 05:30:50 compute-0 ceph-mon[75840]: 8.1e scrub ok
Nov 22 05:30:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:30:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 22 05:30:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 22 05:30:51 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 22 05:30:51 compute-0 ceph-mon[75840]: pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:51 compute-0 ceph-mon[75840]: 9.2 scrub starts
Nov 22 05:30:51 compute-0 ceph-mon[75840]: 9.2 scrub ok
Nov 22 05:30:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:52 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 22 05:30:52 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 22 05:30:52 compute-0 ceph-mon[75840]: 11.1c scrub starts
Nov 22 05:30:52 compute-0 ceph-mon[75840]: 11.1c scrub ok
Nov 22 05:30:52 compute-0 sshd-session[118319]: Accepted publickey for zuul from 192.168.122.30 port 50394 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:30:52 compute-0 systemd-logind[798]: New session 37 of user zuul.
Nov 22 05:30:52 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 22 05:30:52 compute-0 sshd-session[118319]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:30:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:30:53 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 22 05:30:53 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 22 05:30:53 compute-0 ceph-mon[75840]: pgmap v302: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:53 compute-0 ceph-mon[75840]: 8.1c scrub starts
Nov 22 05:30:53 compute-0 ceph-mon[75840]: 8.1c scrub ok
Nov 22 05:30:53 compute-0 python3.9[118472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:54 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 22 05:30:54 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 22 05:30:54 compute-0 ceph-mon[75840]: 11.12 scrub starts
Nov 22 05:30:54 compute-0 ceph-mon[75840]: 11.12 scrub ok
Nov 22 05:30:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 22 05:30:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 22 05:30:54 compute-0 python3.9[118626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:30:55 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 22 05:30:55 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 22 05:30:55 compute-0 ceph-mon[75840]: pgmap v303: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:55 compute-0 ceph-mon[75840]: 8.11 scrub starts
Nov 22 05:30:55 compute-0 ceph-mon[75840]: 8.11 scrub ok
Nov 22 05:30:55 compute-0 ceph-mon[75840]: 9.4 scrub starts
Nov 22 05:30:55 compute-0 ceph-mon[75840]: 9.4 scrub ok
Nov 22 05:30:55 compute-0 sudo[118780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzblhrkehplnorvysdajrblmucyhwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789455.0143383-40-40029943077292/AnsiballZ_setup.py'
Nov 22 05:30:55 compute-0 sudo[118780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:55 compute-0 python3.9[118782]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:30:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:55 compute-0 sudo[118780]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:56 compute-0 ceph-mon[75840]: 8.12 deep-scrub starts
Nov 22 05:30:56 compute-0 ceph-mon[75840]: 8.12 deep-scrub ok
Nov 22 05:30:56 compute-0 sudo[118864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govqhhmsyelmoewqfbbutzlscgshrtio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789455.0143383-40-40029943077292/AnsiballZ_dnf.py'
Nov 22 05:30:56 compute-0 sudo[118864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:56 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 22 05:30:56 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 22 05:30:56 compute-0 python3.9[118866]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:30:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:30:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 22 05:30:57 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 22 05:30:57 compute-0 ceph-mon[75840]: pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:57 compute-0 ceph-mon[75840]: 9.a scrub starts
Nov 22 05:30:57 compute-0 ceph-mon[75840]: 9.a scrub ok
Nov 22 05:30:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 22 05:30:57 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 22 05:30:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:57 compute-0 sudo[118864]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 22 05:30:58 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 22 05:30:58 compute-0 ceph-mon[75840]: 8.b scrub starts
Nov 22 05:30:58 compute-0 ceph-mon[75840]: 8.b scrub ok
Nov 22 05:30:58 compute-0 ceph-mon[75840]: 9.10 scrub starts
Nov 22 05:30:58 compute-0 ceph-mon[75840]: 9.10 scrub ok
Nov 22 05:30:58 compute-0 sudo[119017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meyapajqafnusplpemkqadyoresuinvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789458.0996654-52-102241434859385/AnsiballZ_setup.py'
Nov 22 05:30:58 compute-0 sudo[119017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:30:58 compute-0 python3.9[119019]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:30:59 compute-0 sudo[119017]: pam_unix(sudo:session): session closed for user root
Nov 22 05:30:59 compute-0 ceph-mon[75840]: pgmap v305: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:59 compute-0 ceph-mon[75840]: 11.4 scrub starts
Nov 22 05:30:59 compute-0 ceph-mon[75840]: 11.4 scrub ok
Nov 22 05:30:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 22 05:30:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 22 05:30:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:30:59 compute-0 sudo[119212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvajdjbvabxqwlxqqzoofxkgetuvzrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789459.3666816-63-23498276737235/AnsiballZ_file.py'
Nov 22 05:30:59 compute-0 sudo[119212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:00 compute-0 python3.9[119214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:00 compute-0 sudo[119212]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:00 compute-0 ceph-mon[75840]: 9.12 scrub starts
Nov 22 05:31:00 compute-0 ceph-mon[75840]: 9.12 scrub ok
Nov 22 05:31:00 compute-0 sudo[119364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aadjewwdsgouukacirhosmhhfnizeuue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789460.2465856-71-76675812911814/AnsiballZ_command.py'
Nov 22 05:31:00 compute-0 sudo[119364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:00 compute-0 python3.9[119366]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:31:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 22 05:31:01 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 22 05:31:01 compute-0 sudo[119364]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:01 compute-0 ceph-mon[75840]: pgmap v306: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 22 05:31:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 22 05:31:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:01 compute-0 sudo[119529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raedyqodabujwoolyzwpmvhzdrbpdnxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789461.2993522-79-42865993551061/AnsiballZ_stat.py'
Nov 22 05:31:01 compute-0 sudo[119529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:02 compute-0 python3.9[119531]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:02 compute-0 sudo[119529]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:02 compute-0 sudo[119607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bosixnsleruwqpjtrbgdmtoimwhzdefs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789461.2993522-79-42865993551061/AnsiballZ_file.py'
Nov 22 05:31:02 compute-0 sudo[119607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:02 compute-0 ceph-mon[75840]: 9.e deep-scrub starts
Nov 22 05:31:02 compute-0 ceph-mon[75840]: 9.e deep-scrub ok
Nov 22 05:31:02 compute-0 python3.9[119609]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:02 compute-0 sudo[119607]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:03 compute-0 sudo[119759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahdhcsxlzjyykhhsxlpgcagkmzoqctwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789462.7589097-91-173858474002718/AnsiballZ_stat.py'
Nov 22 05:31:03 compute-0 sudo[119759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:03 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 22 05:31:03 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 22 05:31:03 compute-0 python3.9[119761]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:03 compute-0 ceph-mon[75840]: 9.14 scrub starts
Nov 22 05:31:03 compute-0 ceph-mon[75840]: 9.14 scrub ok
Nov 22 05:31:03 compute-0 ceph-mon[75840]: pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:03 compute-0 sudo[119759]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:03 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 22 05:31:03 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 22 05:31:03 compute-0 sudo[119837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpljidbjowzuoqtoduxkzxgbrvvxsvqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789462.7589097-91-173858474002718/AnsiballZ_file.py'
Nov 22 05:31:03 compute-0 sudo[119837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:03 compute-0 python3.9[119839]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:03 compute-0 sudo[119837]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:04 compute-0 ceph-mon[75840]: 8.6 scrub starts
Nov 22 05:31:04 compute-0 ceph-mon[75840]: 8.6 scrub ok
Nov 22 05:31:04 compute-0 ceph-mon[75840]: 9.1a scrub starts
Nov 22 05:31:04 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 22 05:31:04 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 22 05:31:04 compute-0 sudo[119989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulnthzfdsilleivyhntsnxpijvngjnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789464.1861136-104-75905523159789/AnsiballZ_ini_file.py'
Nov 22 05:31:04 compute-0 sudo[119989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:04 compute-0 python3.9[119991]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:04 compute-0 sudo[119989]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 22 05:31:05 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 22 05:31:05 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Nov 22 05:31:05 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Nov 22 05:31:05 compute-0 ceph-mon[75840]: 9.1a scrub ok
Nov 22 05:31:05 compute-0 ceph-mon[75840]: pgmap v308: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:05 compute-0 sudo[120141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspfolwpibryrkcdnhlwvtdlpueqrfvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789465.0590117-104-129480763452815/AnsiballZ_ini_file.py'
Nov 22 05:31:05 compute-0 sudo[120141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:05 compute-0 python3.9[120143]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:05 compute-0 sudo[120141]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v309: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:06 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 22 05:31:06 compute-0 sudo[120293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpwaooyjlrjupvriotmnmwayohxqymim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789465.904592-104-140636832887619/AnsiballZ_ini_file.py'
Nov 22 05:31:06 compute-0 sudo[120293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:06 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 11.5 scrub starts
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 11.5 scrub ok
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 9.6 scrub starts
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 9.6 scrub ok
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 11.6 deep-scrub starts
Nov 22 05:31:06 compute-0 ceph-mon[75840]: 11.6 deep-scrub ok
Nov 22 05:31:06 compute-0 python3.9[120295]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:06 compute-0 sudo[120293]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:07 compute-0 sudo[120445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyewisrqnfdiazobgucqjozvlunytolv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789466.6626778-104-194610038871640/AnsiballZ_ini_file.py'
Nov 22 05:31:07 compute-0 sudo[120445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:07 compute-0 python3.9[120447]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:07 compute-0 sudo[120445]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:07 compute-0 ceph-mon[75840]: pgmap v309: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:07 compute-0 ceph-mon[75840]: 8.18 scrub starts
Nov 22 05:31:07 compute-0 ceph-mon[75840]: 8.18 scrub ok
Nov 22 05:31:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:07 compute-0 sudo[120597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqaimftrgsayrefpfmqnoxrrnfozfow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789467.4743948-135-69480813720716/AnsiballZ_dnf.py'
Nov 22 05:31:07 compute-0 sudo[120597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:08 compute-0 python3.9[120599]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:31:09 compute-0 ceph-mon[75840]: pgmap v310: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v311: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 22 05:31:10 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 22 05:31:10 compute-0 sudo[120597]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:11 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 22 05:31:11 compute-0 sudo[120750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmldwcjsjzxghcodkoyrynuigjtfuegv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789470.6918178-146-19178920519270/AnsiballZ_setup.py'
Nov 22 05:31:11 compute-0 sudo[120750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:11 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 22 05:31:11 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 22 05:31:11 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 22 05:31:11 compute-0 python3.9[120752]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:31:11 compute-0 sudo[120750]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:11 compute-0 ceph-mon[75840]: pgmap v311: 321 pgs: 321 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:11 compute-0 ceph-mon[75840]: 8.1f scrub starts
Nov 22 05:31:11 compute-0 ceph-mon[75840]: 8.1f scrub ok
Nov 22 05:31:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 22 05:31:12 compute-0 sudo[120904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzzuhnvmtzmyvmtyribykilelgfumvvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789471.6810114-154-6779624505525/AnsiballZ_stat.py'
Nov 22 05:31:12 compute-0 sudo[120904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 22 05:31:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:12 compute-0 python3.9[120906]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:31:12 compute-0 sudo[120904]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:12 compute-0 ceph-mon[75840]: 9.17 scrub starts
Nov 22 05:31:12 compute-0 ceph-mon[75840]: 9.17 scrub ok
Nov 22 05:31:12 compute-0 ceph-mon[75840]: 8.1d scrub starts
Nov 22 05:31:12 compute-0 ceph-mon[75840]: 8.1d scrub ok
Nov 22 05:31:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 22 05:31:12 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 22 05:31:12 compute-0 sudo[121056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywzuazliqmxuqhszmglvmjeadoukeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789472.542652-163-108008841712340/AnsiballZ_stat.py'
Nov 22 05:31:12 compute-0 sudo[121056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:12 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 22 05:31:13 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 22 05:31:13 compute-0 python3.9[121058]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:31:13 compute-0 sudo[121056]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:13 compute-0 ceph-mon[75840]: pgmap v312: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:13 compute-0 ceph-mon[75840]: 9.f scrub starts
Nov 22 05:31:13 compute-0 ceph-mon[75840]: 9.f scrub ok
Nov 22 05:31:13 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 22 05:31:13 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:13 compute-0 sudo[121208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhqhdlkymcbkzerhjxeasmomrpvovoho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789473.5163336-173-99902376699798/AnsiballZ_command.py'
Nov 22 05:31:13 compute-0 sudo[121208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:14 compute-0 python3.9[121210]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:31:14 compute-0 sudo[121208]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:14 compute-0 ceph-mon[75840]: 11.7 deep-scrub starts
Nov 22 05:31:14 compute-0 ceph-mon[75840]: 11.7 deep-scrub ok
Nov 22 05:31:14 compute-0 ceph-mon[75840]: 9.7 scrub starts
Nov 22 05:31:14 compute-0 ceph-mon[75840]: 9.7 scrub ok
Nov 22 05:31:14 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Nov 22 05:31:14 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Nov 22 05:31:15 compute-0 sudo[121361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grdomdmwwzgvdwfgqshjvhcifkxtftno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789474.5033753-183-117452185004149/AnsiballZ_service_facts.py'
Nov 22 05:31:15 compute-0 sudo[121361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:15 compute-0 python3.9[121363]: ansible-service_facts Invoked
Nov 22 05:31:15 compute-0 network[121380]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:31:15 compute-0 network[121381]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:31:15 compute-0 network[121382]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:31:15 compute-0 ceph-mon[75840]: 11.a scrub starts
Nov 22 05:31:15 compute-0 ceph-mon[75840]: 11.a scrub ok
Nov 22 05:31:15 compute-0 ceph-mon[75840]: pgmap v313: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:16 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 22 05:31:16 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 22 05:31:16 compute-0 ceph-mon[75840]: 11.c deep-scrub starts
Nov 22 05:31:16 compute-0 ceph-mon[75840]: 11.c deep-scrub ok
Nov 22 05:31:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 22 05:31:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 22 05:31:17 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 22 05:31:17 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 22 05:31:17 compute-0 ceph-mon[75840]: pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:17 compute-0 ceph-mon[75840]: 9.8 scrub starts
Nov 22 05:31:17 compute-0 ceph-mon[75840]: 9.8 scrub ok
Nov 22 05:31:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:18 compute-0 ceph-mon[75840]: 9.18 scrub starts
Nov 22 05:31:18 compute-0 ceph-mon[75840]: 8.9 scrub starts
Nov 22 05:31:18 compute-0 ceph-mon[75840]: 9.18 scrub ok
Nov 22 05:31:18 compute-0 ceph-mon[75840]: 8.9 scrub ok
Nov 22 05:31:18 compute-0 ceph-mon[75840]: pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:19 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 22 05:31:19 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 22 05:31:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 22 05:31:19 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 22 05:31:19 compute-0 ceph-mon[75840]: 11.13 scrub starts
Nov 22 05:31:19 compute-0 ceph-mon[75840]: 11.13 scrub ok
Nov 22 05:31:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:19 compute-0 sudo[121361]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:20 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 22 05:31:20 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 22 05:31:20 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 22 05:31:20 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 22 05:31:20 compute-0 ceph-mon[75840]: 11.10 scrub starts
Nov 22 05:31:20 compute-0 ceph-mon[75840]: 11.10 scrub ok
Nov 22 05:31:20 compute-0 ceph-mon[75840]: pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:20 compute-0 ceph-mon[75840]: 11.16 scrub starts
Nov 22 05:31:20 compute-0 ceph-mon[75840]: 11.16 scrub ok
Nov 22 05:31:20 compute-0 sudo[121665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyivpqbhdwehwbzmmrwzajyhuxjyysny ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763789480.506294-198-65082357167429/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763789480.506294-198-65082357167429/args'
Nov 22 05:31:20 compute-0 sudo[121665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:20 compute-0 sudo[121665]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:21 compute-0 ceph-mon[75840]: 9.c scrub starts
Nov 22 05:31:21 compute-0 ceph-mon[75840]: 9.c scrub ok
Nov 22 05:31:21 compute-0 sudo[121832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opmlofkkdlyfbejblzprowivhpdwukzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789481.4044783-209-9477300303783/AnsiballZ_dnf.py'
Nov 22 05:31:21 compute-0 sudo[121832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:21 compute-0 python3.9[121834]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:31:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:22 compute-0 ceph-mon[75840]: pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:23 compute-0 sudo[121832]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 22 05:31:23 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 22 05:31:23 compute-0 ceph-mon[75840]: 11.1d scrub starts
Nov 22 05:31:23 compute-0 ceph-mon[75840]: 11.1d scrub ok
Nov 22 05:31:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:24 compute-0 sudo[121985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inbsctgogvowsfdflslykaduactgxdwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789483.5160198-222-123709717276996/AnsiballZ_package_facts.py'
Nov 22 05:31:24 compute-0 sudo[121985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:24 compute-0 python3.9[121987]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 05:31:24 compute-0 ceph-mon[75840]: pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:24 compute-0 sudo[121985]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:25 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 22 05:31:25 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 22 05:31:25 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 22 05:31:25 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 22 05:31:25 compute-0 sudo[122137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutchqsyaauovcfnqysaynfwskmcnrqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789485.2210116-232-193313859120774/AnsiballZ_stat.py'
Nov 22 05:31:25 compute-0 sudo[122137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:25 compute-0 ceph-mon[75840]: 9.13 deep-scrub starts
Nov 22 05:31:25 compute-0 ceph-mon[75840]: 9.13 deep-scrub ok
Nov 22 05:31:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:25 compute-0 python3.9[122139]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:25 compute-0 sudo[122137]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:26 compute-0 sudo[122215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvpxiiundhocjjisuulspgmgczmhsvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789485.2210116-232-193313859120774/AnsiballZ_file.py'
Nov 22 05:31:26 compute-0 sudo[122215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:26 compute-0 python3.9[122217]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:26 compute-0 sudo[122215]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:26 compute-0 ceph-mon[75840]: 11.19 scrub starts
Nov 22 05:31:26 compute-0 ceph-mon[75840]: 11.19 scrub ok
Nov 22 05:31:26 compute-0 ceph-mon[75840]: pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:26 compute-0 sudo[122367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dumogrydwsxioxcvaqcqmiinkkcwkfzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789486.6110272-244-199413707449426/AnsiballZ_stat.py'
Nov 22 05:31:26 compute-0 sudo[122367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:27 compute-0 python3.9[122369]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:27 compute-0 sudo[122367]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:27 compute-0 sudo[122445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adlizkpnnedavhmrjugvlgmuhvarbesn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789486.6110272-244-199413707449426/AnsiballZ_file.py'
Nov 22 05:31:27 compute-0 sudo[122445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:27 compute-0 python3.9[122447]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:27 compute-0 sudo[122445]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 22 05:31:28 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 22 05:31:28 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 22 05:31:28 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 22 05:31:28 compute-0 sudo[122597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdnzytqqgcddjamwokcbqxtavndmiahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789488.3569329-262-90119151900761/AnsiballZ_lineinfile.py'
Nov 22 05:31:28 compute-0 sudo[122597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:28 compute-0 ceph-mon[75840]: pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:28 compute-0 ceph-mon[75840]: 10.13 scrub starts
Nov 22 05:31:28 compute-0 ceph-mon[75840]: 10.13 scrub ok
Nov 22 05:31:29 compute-0 python3.9[122599]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:29 compute-0 sudo[122597]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:29 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 22 05:31:29 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 22 05:31:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:29 compute-0 ceph-mon[75840]: 8.1a scrub starts
Nov 22 05:31:29 compute-0 ceph-mon[75840]: 8.1a scrub ok
Nov 22 05:31:29 compute-0 ceph-mon[75840]: 9.19 scrub starts
Nov 22 05:31:29 compute-0 ceph-mon[75840]: 9.19 scrub ok
Nov 22 05:31:29 compute-0 sudo[122749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgpizngywhycdlswvnznhejzteedsnvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789489.650702-277-6188649657079/AnsiballZ_setup.py'
Nov 22 05:31:29 compute-0 sudo[122749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:30 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 22 05:31:30 compute-0 python3.9[122751]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:31:30 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 22 05:31:30 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 22 05:31:30 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 22 05:31:30 compute-0 sudo[122749]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:30 compute-0 ceph-mon[75840]: pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:30 compute-0 ceph-mon[75840]: 10.10 scrub starts
Nov 22 05:31:30 compute-0 ceph-mon[75840]: 10.10 scrub ok
Nov 22 05:31:31 compute-0 sudo[122833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssisvfdiymmhookimcquwlerrmgqusyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789489.650702-277-6188649657079/AnsiballZ_systemd.py'
Nov 22 05:31:31 compute-0 sudo[122833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:31 compute-0 python3.9[122835]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:31:31 compute-0 sudo[122833]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:31 compute-0 ceph-mon[75840]: 9.11 scrub starts
Nov 22 05:31:31 compute-0 ceph-mon[75840]: 9.11 scrub ok
Nov 22 05:31:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:32 compute-0 sshd-session[118322]: Connection closed by 192.168.122.30 port 50394
Nov 22 05:31:32 compute-0 sshd-session[118319]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:31:32 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 22 05:31:32 compute-0 systemd[1]: session-37.scope: Consumed 27.460s CPU time.
Nov 22 05:31:32 compute-0 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Nov 22 05:31:32 compute-0 systemd-logind[798]: Removed session 37.
Nov 22 05:31:32 compute-0 ceph-mon[75840]: pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:33 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 22 05:31:33 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 22 05:31:33 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 22 05:31:33 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 22 05:31:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:33 compute-0 ceph-mon[75840]: 10.6 scrub starts
Nov 22 05:31:33 compute-0 ceph-mon[75840]: 10.6 scrub ok
Nov 22 05:31:34 compute-0 ceph-mon[75840]: 9.9 scrub starts
Nov 22 05:31:34 compute-0 ceph-mon[75840]: 9.9 scrub ok
Nov 22 05:31:34 compute-0 ceph-mon[75840]: pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:35 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 22 05:31:35 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 22 05:31:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:36 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 22 05:31:36 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 22 05:31:36 compute-0 ceph-mon[75840]: 9.3 scrub starts
Nov 22 05:31:36 compute-0 ceph-mon[75840]: 9.3 scrub ok
Nov 22 05:31:36 compute-0 ceph-mon[75840]: pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:36 compute-0 ceph-mon[75840]: 10.2 scrub starts
Nov 22 05:31:36 compute-0 ceph-mon[75840]: 10.2 scrub ok
Nov 22 05:31:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:37 compute-0 sshd-session[122863]: Accepted publickey for zuul from 192.168.122.30 port 36414 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:31:37 compute-0 systemd-logind[798]: New session 38 of user zuul.
Nov 22 05:31:37 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 22 05:31:37 compute-0 sshd-session[122863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:31:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:38 compute-0 sudo[123016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqdjgmveejvbutwsrotzruxakuungiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789497.5249612-22-199129098281835/AnsiballZ_file.py'
Nov 22 05:31:38 compute-0 sudo[123016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:38 compute-0 python3.9[123018]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:38 compute-0 sudo[123016]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:38 compute-0 ceph-mon[75840]: pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:39 compute-0 sudo[123168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jioqurcwqrjllsqlcczzkdmlxzgeahzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789498.5588276-34-150182774461120/AnsiballZ_stat.py'
Nov 22 05:31:39 compute-0 sudo[123168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:39 compute-0 python3.9[123170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:39 compute-0 sudo[123168]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:39 compute-0 sudo[123246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmyxcsvxguwuttlqobvbzxuvjbqfsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789498.5588276-34-150182774461120/AnsiballZ_file.py'
Nov 22 05:31:39 compute-0 sudo[123246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:39 compute-0 python3.9[123248]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:39 compute-0 sudo[123246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:40 compute-0 sshd-session[122866]: Connection closed by 192.168.122.30 port 36414
Nov 22 05:31:40 compute-0 sshd-session[122863]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:31:40 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 22 05:31:40 compute-0 systemd[1]: session-38.scope: Consumed 1.916s CPU time.
Nov 22 05:31:40 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Nov 22 05:31:40 compute-0 systemd-logind[798]: Removed session 38.
Nov 22 05:31:40 compute-0 ceph-mon[75840]: pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:41 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 22 05:31:41 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 22 05:31:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 22 05:31:42 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 22 05:31:42 compute-0 ceph-mon[75840]: 9.1 scrub starts
Nov 22 05:31:42 compute-0 ceph-mon[75840]: 9.1 scrub ok
Nov 22 05:31:42 compute-0 ceph-mon[75840]: pgmap v327: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:31:43
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta']
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:31:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:31:43 compute-0 ceph-mon[75840]: 9.1d scrub starts
Nov 22 05:31:43 compute-0 ceph-mon[75840]: 9.1d scrub ok
Nov 22 05:31:44 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 22 05:31:44 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 22 05:31:44 compute-0 ceph-mon[75840]: pgmap v328: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:45 compute-0 sshd-session[123274]: Accepted publickey for zuul from 192.168.122.30 port 36416 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:31:45 compute-0 systemd-logind[798]: New session 39 of user zuul.
Nov 22 05:31:45 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 22 05:31:45 compute-0 sshd-session[123274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:31:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 22 05:31:45 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 22 05:31:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:45 compute-0 ceph-mon[75840]: 9.5 scrub starts
Nov 22 05:31:45 compute-0 ceph-mon[75840]: 9.5 scrub ok
Nov 22 05:31:46 compute-0 python3.9[123427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:31:46 compute-0 ceph-mon[75840]: 10.b scrub starts
Nov 22 05:31:46 compute-0 ceph-mon[75840]: 10.b scrub ok
Nov 22 05:31:46 compute-0 ceph-mon[75840]: pgmap v329: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:47 compute-0 sudo[123581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbcuwkutrdybeseswyomjzrtclzwypv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789506.936017-33-202546671459658/AnsiballZ_file.py'
Nov 22 05:31:47 compute-0 sudo[123581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:47 compute-0 python3.9[123583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:47 compute-0 sudo[123581]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:48 compute-0 sudo[123756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caodggvjughwcwadwimbligazplfptil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789507.9133677-41-197057900017065/AnsiballZ_stat.py'
Nov 22 05:31:48 compute-0 sudo[123756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:48 compute-0 python3.9[123758]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:48 compute-0 sudo[123756]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:49 compute-0 ceph-mon[75840]: pgmap v330: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:49 compute-0 sudo[123834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqiugaprduqkhvauqcxnatbqkbmlgdab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789507.9133677-41-197057900017065/AnsiballZ_file.py'
Nov 22 05:31:49 compute-0 sudo[123834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:49 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 22 05:31:49 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 22 05:31:49 compute-0 python3.9[123836]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.xol36ikt recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:49 compute-0 sudo[123834]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Nov 22 05:31:49 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Nov 22 05:31:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 22 05:31:50 compute-0 sudo[123960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:50 compute-0 sudo[123960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:50 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 22 05:31:50 compute-0 sudo[123960]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:50 compute-0 sudo[124011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jafudacnnztkkvzsrrsaommlriqimabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789509.8026175-61-265369373866390/AnsiballZ_stat.py'
Nov 22 05:31:50 compute-0 sudo[124011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:50 compute-0 sudo[124012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:31:50 compute-0 sudo[124012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:50 compute-0 sudo[124012]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:50 compute-0 sudo[124039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:50 compute-0 sudo[124039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:50 compute-0 sudo[124039]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:50 compute-0 python3.9[124019]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:50 compute-0 sudo[124064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:31:50 compute-0 sudo[124064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:50 compute-0 sudo[124011]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 22 05:31:50 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 22 05:31:50 compute-0 sudo[124177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtyympcemnpfbluvutibocaceminskva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789509.8026175-61-265369373866390/AnsiballZ_file.py'
Nov 22 05:31:50 compute-0 sudo[124177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:50 compute-0 sudo[124064]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:50 compute-0 python3.9[124179]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.26yh0ct8 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:50 compute-0 sudo[124177]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:31:51 compute-0 ceph-mon[75840]: 9.b scrub starts
Nov 22 05:31:51 compute-0 ceph-mon[75840]: 9.b scrub ok
Nov 22 05:31:51 compute-0 ceph-mon[75840]: 10.f deep-scrub starts
Nov 22 05:31:51 compute-0 ceph-mon[75840]: 10.f deep-scrub ok
Nov 22 05:31:51 compute-0 ceph-mon[75840]: pgmap v331: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:51 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 409b92e4-ab82-479d-b863-407fc4c02643 does not exist
Nov 22 05:31:51 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1f2ae8c9-1dd5-4056-89cf-5333632f8e55 does not exist
Nov 22 05:31:51 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a5fe3c2d-e31b-414f-a0f1-8a4af065417d does not exist
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:31:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:31:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:31:51 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 22 05:31:51 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 22 05:31:51 compute-0 sudo[124214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:51 compute-0 sudo[124214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:51 compute-0 sudo[124214]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:51 compute-0 sudo[124246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:31:51 compute-0 sudo[124246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:51 compute-0 sudo[124246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:51 compute-0 sudo[124294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:51 compute-0 sudo[124294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:51 compute-0 sudo[124294]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:51 compute-0 sudo[124348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:31:51 compute-0 sudo[124348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:51 compute-0 sudo[124458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktuqnewwmbwpvezgktidilkliucsskzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789511.239933-74-67903165176727/AnsiballZ_file.py'
Nov 22 05:31:51 compute-0 sudo[124458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:51 compute-0 python3.9[124462]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.799735139 +0000 UTC m=+0.052936831 container create dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:31:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:51 compute-0 sudo[124458]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:51 compute-0 systemd[1]: Started libpod-conmon-dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61.scope.
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.77997315 +0000 UTC m=+0.033174862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.902986765 +0000 UTC m=+0.156188467 container init dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.912501789 +0000 UTC m=+0.165703511 container start dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.916404588 +0000 UTC m=+0.169606290 container attach dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:31:51 compute-0 thirsty_wiles[124506]: 167 167
Nov 22 05:31:51 compute-0 systemd[1]: libpod-dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61.scope: Deactivated successfully.
Nov 22 05:31:51 compute-0 conmon[124506]: conmon dcb393ab380e9e47a4bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61.scope/container/memory.events
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.922291361 +0000 UTC m=+0.175493083 container died dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-befe5db03310cea82fb20562a87bcae08f0dfb0e8a1ce9fb88fb9330f1e4728e-merged.mount: Deactivated successfully.
Nov 22 05:31:51 compute-0 podman[124489]: 2025-11-22 05:31:51.981168526 +0000 UTC m=+0.234370218 container remove dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:31:52 compute-0 systemd[1]: libpod-conmon-dcb393ab380e9e47a4bf5a0131a7a1e5883f45a48f3b8c9fd49df77f83951c61.scope: Deactivated successfully.
Nov 22 05:31:52 compute-0 ceph-mon[75840]: 9.d scrub starts
Nov 22 05:31:52 compute-0 ceph-mon[75840]: 9.d scrub ok
Nov 22 05:31:52 compute-0 ceph-mon[75840]: 10.19 deep-scrub starts
Nov 22 05:31:52 compute-0 ceph-mon[75840]: 10.19 deep-scrub ok
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:31:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:31:52 compute-0 ceph-mon[75840]: 9.1b scrub starts
Nov 22 05:31:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:52 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 22 05:31:52 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 22 05:31:52 compute-0 podman[124608]: 2025-11-22 05:31:52.240192457 +0000 UTC m=+0.071171807 container create e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:31:52 compute-0 systemd[1]: Started libpod-conmon-e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f.scope.
Nov 22 05:31:52 compute-0 podman[124608]: 2025-11-22 05:31:52.212178679 +0000 UTC m=+0.043158079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:52 compute-0 podman[124608]: 2025-11-22 05:31:52.358234794 +0000 UTC m=+0.189214204 container init e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:31:52 compute-0 podman[124608]: 2025-11-22 05:31:52.372927352 +0000 UTC m=+0.203906712 container start e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:31:52 compute-0 podman[124608]: 2025-11-22 05:31:52.377448837 +0000 UTC m=+0.208428187 container attach e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:31:52 compute-0 sudo[124703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvpooauvixkybdxwsdtcgoixkbfvjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789512.0629816-82-205500415386221/AnsiballZ_stat.py'
Nov 22 05:31:52 compute-0 sudo[124703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:31:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:31:52 compute-0 python3.9[124705]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:52 compute-0 sudo[124703]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 sudo[124784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-letqekrppyskrzbfrsinzswhhfkiwjgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789512.0629816-82-205500415386221/AnsiballZ_file.py'
Nov 22 05:31:53 compute-0 sudo[124784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:53 compute-0 ceph-mon[75840]: 9.1b scrub ok
Nov 22 05:31:53 compute-0 ceph-mon[75840]: pgmap v332: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 22 05:31:53 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 22 05:31:53 compute-0 python3.9[124787]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:53 compute-0 sudo[124784]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 laughing_jang[124651]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:31:53 compute-0 laughing_jang[124651]: --> relative data size: 1.0
Nov 22 05:31:53 compute-0 laughing_jang[124651]: --> All data devices are unavailable
Nov 22 05:31:53 compute-0 systemd[1]: libpod-e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f.scope: Deactivated successfully.
Nov 22 05:31:53 compute-0 systemd[1]: libpod-e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f.scope: Consumed 1.087s CPU time.
Nov 22 05:31:53 compute-0 podman[124608]: 2025-11-22 05:31:53.553870767 +0000 UTC m=+1.384850147 container died e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d5568185a79311f536a4c9d9a439ee395152c2b1b2958a31d7f83b1e008dc45-merged.mount: Deactivated successfully.
Nov 22 05:31:53 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 22 05:31:53 compute-0 podman[124608]: 2025-11-22 05:31:53.635808942 +0000 UTC m=+1.466788292 container remove e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:31:53 compute-0 systemd[1]: libpod-conmon-e5b298647db094fb04fc030ce92f4cf1c584268a42c0a4a7feff52ae1f77415f.scope: Deactivated successfully.
Nov 22 05:31:53 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 22 05:31:53 compute-0 sudo[124348]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 sudo[124943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:53 compute-0 sudo[124943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:53 compute-0 sudo[124943]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 sudo[124993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofgkvbplhjbbunytfmtpcohwlsktqsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789513.3973434-82-236471022453534/AnsiballZ_stat.py'
Nov 22 05:31:53 compute-0 sudo[124993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:53 compute-0 sudo[124995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:31:53 compute-0 sudo[124995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:53 compute-0 sudo[124995]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 sudo[125022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:53 compute-0 sudo[125022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:53 compute-0 sudo[125022]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:53 compute-0 sudo[125047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:31:53 compute-0 sudo[125047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:53 compute-0 python3.9[125001]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:53 compute-0 sudo[124993]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:54 compute-0 ceph-mon[75840]: 9.16 scrub starts
Nov 22 05:31:54 compute-0 ceph-mon[75840]: 9.16 scrub ok
Nov 22 05:31:54 compute-0 sudo[125193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmvtqwghiuuqjzfmxgtsecmtaxklopbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789513.3973434-82-236471022453534/AnsiballZ_file.py'
Nov 22 05:31:54 compute-0 sudo[125193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.314931956 +0000 UTC m=+0.048498348 container create f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:31:54 compute-0 systemd[1]: Started libpod-conmon-f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de.scope.
Nov 22 05:31:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.292859853 +0000 UTC m=+0.026426305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.405104599 +0000 UTC m=+0.138671031 container init f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.416986619 +0000 UTC m=+0.150552981 container start f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.419980482 +0000 UTC m=+0.153546914 container attach f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:31:54 compute-0 affectionate_mccarthy[125204]: 167 167
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.424266581 +0000 UTC m=+0.157832933 container died f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:31:54 compute-0 systemd[1]: libpod-f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de.scope: Deactivated successfully.
Nov 22 05:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-55470bef1a4bfd8ce39f0fc47362763b55a8b5c948147c26704c5982941f4ea4-merged.mount: Deactivated successfully.
Nov 22 05:31:54 compute-0 podman[125171]: 2025-11-22 05:31:54.46311727 +0000 UTC m=+0.196683612 container remove f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mccarthy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:31:54 compute-0 python3.9[125201]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:31:54 compute-0 sudo[125193]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:54 compute-0 systemd[1]: libpod-conmon-f23971d78daf21cf24a53c0488bed3508c79636c2be862c901a8342a47aed6de.scope: Deactivated successfully.
Nov 22 05:31:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 22 05:31:54 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 22 05:31:54 compute-0 podman[125252]: 2025-11-22 05:31:54.718092889 +0000 UTC m=+0.077209325 container create 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:31:54 compute-0 systemd[1]: Started libpod-conmon-55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1.scope.
Nov 22 05:31:54 compute-0 podman[125252]: 2025-11-22 05:31:54.684052274 +0000 UTC m=+0.043168770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb5624650fc318db7b2962acb623f14a026b960e1c25f9b3939a7a2b9b8360a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb5624650fc318db7b2962acb623f14a026b960e1c25f9b3939a7a2b9b8360a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb5624650fc318db7b2962acb623f14a026b960e1c25f9b3939a7a2b9b8360a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb5624650fc318db7b2962acb623f14a026b960e1c25f9b3939a7a2b9b8360a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:54 compute-0 podman[125252]: 2025-11-22 05:31:54.831030254 +0000 UTC m=+0.190146750 container init 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:31:54 compute-0 podman[125252]: 2025-11-22 05:31:54.845343802 +0000 UTC m=+0.204460218 container start 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:31:54 compute-0 podman[125252]: 2025-11-22 05:31:54.8492598 +0000 UTC m=+0.208376296 container attach 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:31:55 compute-0 ceph-mon[75840]: 9.1c scrub starts
Nov 22 05:31:55 compute-0 ceph-mon[75840]: 9.1c scrub ok
Nov 22 05:31:55 compute-0 ceph-mon[75840]: 10.12 scrub starts
Nov 22 05:31:55 compute-0 ceph-mon[75840]: 10.12 scrub ok
Nov 22 05:31:55 compute-0 ceph-mon[75840]: pgmap v333: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:55 compute-0 sudo[125398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrawljvobfggnxrqxieedkshoipoitxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789514.7178628-105-165103716306812/AnsiballZ_file.py'
Nov 22 05:31:55 compute-0 sudo[125398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:55 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 22 05:31:55 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 22 05:31:55 compute-0 python3.9[125400]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:55 compute-0 sudo[125398]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]: {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     "0": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "devices": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "/dev/loop3"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             ],
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_name": "ceph_lv0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_size": "21470642176",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "name": "ceph_lv0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "tags": {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_name": "ceph",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.crush_device_class": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.encrypted": "0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_id": "0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.vdo": "0"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             },
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "vg_name": "ceph_vg0"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         }
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     ],
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     "1": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "devices": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "/dev/loop4"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             ],
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_name": "ceph_lv1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_size": "21470642176",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "name": "ceph_lv1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "tags": {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_name": "ceph",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.crush_device_class": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.encrypted": "0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_id": "1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.vdo": "0"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             },
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "vg_name": "ceph_vg1"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         }
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     ],
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     "2": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "devices": [
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "/dev/loop5"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             ],
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_name": "ceph_lv2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_size": "21470642176",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "name": "ceph_lv2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "tags": {
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.cluster_name": "ceph",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.crush_device_class": "",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.encrypted": "0",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osd_id": "2",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:                 "ceph.vdo": "0"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             },
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "type": "block",
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:             "vg_name": "ceph_vg2"
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:         }
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]:     ]
Nov 22 05:31:55 compute-0 dreamy_murdock[125309]: }
Nov 22 05:31:55 compute-0 systemd[1]: libpod-55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1.scope: Deactivated successfully.
Nov 22 05:31:55 compute-0 podman[125252]: 2025-11-22 05:31:55.667520297 +0000 UTC m=+1.026636793 container died 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb5624650fc318db7b2962acb623f14a026b960e1c25f9b3939a7a2b9b8360a-merged.mount: Deactivated successfully.
Nov 22 05:31:55 compute-0 podman[125252]: 2025-11-22 05:31:55.74291303 +0000 UTC m=+1.102029486 container remove 55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_murdock, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:31:55 compute-0 systemd[1]: libpod-conmon-55cc744efd457011a868c447945fac2858510b1c85d187f2709cee7f1da755e1.scope: Deactivated successfully.
Nov 22 05:31:55 compute-0 sudo[125047]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:55 compute-0 sudo[125533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:55 compute-0 sudo[125533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:55 compute-0 sudo[125533]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:55 compute-0 sudo[125604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtgmjzdracglvfgaczevkvpeniowlkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789515.5615966-113-46380399737696/AnsiballZ_stat.py'
Nov 22 05:31:55 compute-0 sudo[125604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:55 compute-0 sudo[125580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:31:55 compute-0 sudo[125580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:55 compute-0 sudo[125580]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:56 compute-0 sudo[125619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:56 compute-0 sudo[125619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:56 compute-0 sudo[125619]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:56 compute-0 ceph-mon[75840]: 10.11 scrub starts
Nov 22 05:31:56 compute-0 ceph-mon[75840]: 10.11 scrub ok
Nov 22 05:31:56 compute-0 python3.9[125616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:56 compute-0 sudo[125644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:31:56 compute-0 sudo[125644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:56 compute-0 sudo[125604]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:56 compute-0 sudo[125783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgrganuvwgmdxbkunpfemlvdbbyfuxbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789515.5615966-113-46380399737696/AnsiballZ_file.py'
Nov 22 05:31:56 compute-0 sudo[125783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.553523814 +0000 UTC m=+0.055705668 container create 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:31:56 compute-0 systemd[1]: Started libpod-conmon-455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90.scope.
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.536033058 +0000 UTC m=+0.038214952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.671099598 +0000 UTC m=+0.173281492 container init 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.678798441 +0000 UTC m=+0.180980325 container start 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.683683578 +0000 UTC m=+0.185865522 container attach 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:31:56 compute-0 happy_goldstine[125805]: 167 167
Nov 22 05:31:56 compute-0 systemd[1]: libpod-455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90.scope: Deactivated successfully.
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.687443181 +0000 UTC m=+0.189625035 container died 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3be917fd18cb02bc07927a0c2f20e6a222e406a1f43789ffaf544f31ebf56a-merged.mount: Deactivated successfully.
Nov 22 05:31:56 compute-0 python3.9[125789]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:56 compute-0 podman[125787]: 2025-11-22 05:31:56.729599282 +0000 UTC m=+0.231781136 container remove 455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:31:56 compute-0 systemd[1]: libpod-conmon-455ba79e73d723444cdd0ceef9c6942683f25ba0d766b953e10336c657874a90.scope: Deactivated successfully.
Nov 22 05:31:56 compute-0 sudo[125783]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:56 compute-0 podman[125854]: 2025-11-22 05:31:56.963865636 +0000 UTC m=+0.071686741 container create 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:31:57 compute-0 podman[125854]: 2025-11-22 05:31:56.936529907 +0000 UTC m=+0.044351082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:31:57 compute-0 systemd[1]: Started libpod-conmon-61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f.scope.
Nov 22 05:31:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503b9c4c3b74cfb9389f640112553cc38570a89a1ec47dcaa94653ebce491dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503b9c4c3b74cfb9389f640112553cc38570a89a1ec47dcaa94653ebce491dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503b9c4c3b74cfb9389f640112553cc38570a89a1ec47dcaa94653ebce491dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/503b9c4c3b74cfb9389f640112553cc38570a89a1ec47dcaa94653ebce491dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:31:57 compute-0 ceph-mon[75840]: 9.1e scrub starts
Nov 22 05:31:57 compute-0 ceph-mon[75840]: 9.1e scrub ok
Nov 22 05:31:57 compute-0 ceph-mon[75840]: pgmap v334: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:57 compute-0 podman[125854]: 2025-11-22 05:31:57.101275151 +0000 UTC m=+0.209096316 container init 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:31:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:31:57 compute-0 podman[125854]: 2025-11-22 05:31:57.115845895 +0000 UTC m=+0.223667010 container start 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:31:57 compute-0 podman[125854]: 2025-11-22 05:31:57.119946569 +0000 UTC m=+0.227767734 container attach 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:31:57 compute-0 sudo[126002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulqhrnfuqucuoltcncxjvjbragvhjxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789516.9699264-125-233800964209288/AnsiballZ_stat.py'
Nov 22 05:31:57 compute-0 sudo[126002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:57 compute-0 python3.9[126004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:31:57 compute-0 sudo[126002]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:57 compute-0 sshd-session[125975]: Invalid user shreded from 80.94.92.166 port 48704
Nov 22 05:31:57 compute-0 sudo[126086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahgcecilvfxvpkboohiycyzujbvedbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789516.9699264-125-233800964209288/AnsiballZ_file.py'
Nov 22 05:31:57 compute-0 sudo[126086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:57 compute-0 sshd-session[125975]: Connection closed by invalid user shreded 80.94.92.166 port 48704 [preauth]
Nov 22 05:31:58 compute-0 goofy_robinson[125912]: {
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_id": 1,
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "type": "bluestore"
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     },
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_id": 2,
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "type": "bluestore"
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     },
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_id": 0,
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:         "type": "bluestore"
Nov 22 05:31:58 compute-0 goofy_robinson[125912]:     }
Nov 22 05:31:58 compute-0 goofy_robinson[125912]: }
Nov 22 05:31:58 compute-0 systemd[1]: libpod-61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f.scope: Deactivated successfully.
Nov 22 05:31:58 compute-0 systemd[1]: libpod-61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f.scope: Consumed 1.039s CPU time.
Nov 22 05:31:58 compute-0 python3.9[126090]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:31:58 compute-0 podman[125854]: 2025-11-22 05:31:58.147889577 +0000 UTC m=+1.255710682 container died 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:31:58 compute-0 sudo[126086]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-503b9c4c3b74cfb9389f640112553cc38570a89a1ec47dcaa94653ebce491dc0-merged.mount: Deactivated successfully.
Nov 22 05:31:58 compute-0 podman[125854]: 2025-11-22 05:31:58.209713653 +0000 UTC m=+1.317534728 container remove 61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_robinson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:31:58 compute-0 systemd[1]: libpod-conmon-61b1cbd16aedec6730cd2a4c09c59c4b2aa97db4962e9e7b8b9dd4e8d5167a4f.scope: Deactivated successfully.
Nov 22 05:31:58 compute-0 sudo[125644]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:31:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:31:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f01d4a92-41b3-4cd7-8e51-589f6196c2f9 does not exist
Nov 22 05:31:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a53c136f-2b45-4f2c-8110-5638f67b25b1 does not exist
Nov 22 05:31:58 compute-0 sudo[126147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:31:58 compute-0 sudo[126147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:58 compute-0 sudo[126147]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:58 compute-0 sudo[126189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:31:58 compute-0 sudo[126189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:31:58 compute-0 sudo[126189]: pam_unix(sudo:session): session closed for user root
Nov 22 05:31:59 compute-0 sudo[126322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-comoyzabkdxzjkggpgqzopazmggfdgsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789518.3831637-137-214531381832021/AnsiballZ_systemd.py'
Nov 22 05:31:59 compute-0 sudo[126322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:31:59 compute-0 ceph-mon[75840]: pgmap v335: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:31:59 compute-0 python3.9[126324]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:31:59 compute-0 systemd[1]: Reloading.
Nov 22 05:31:59 compute-0 systemd-rc-local-generator[126352]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:31:59 compute-0 systemd-sysv-generator[126355]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:31:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 22 05:31:59 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 22 05:31:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:31:59 compute-0 sudo[126322]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:00 compute-0 sudo[126511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvzapdyotjhwqrogwunvwrvdoympybj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789520.0606818-145-224985033759402/AnsiballZ_stat.py'
Nov 22 05:32:00 compute-0 sudo[126511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:00 compute-0 python3.9[126513]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:00 compute-0 sudo[126511]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 22 05:32:00 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 22 05:32:01 compute-0 sudo[126589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhudgamlvpwcobmmrwtxazpffyxcizcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789520.0606818-145-224985033759402/AnsiballZ_file.py'
Nov 22 05:32:01 compute-0 sudo[126589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:01 compute-0 python3.9[126591]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:01 compute-0 sudo[126589]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:01 compute-0 ceph-mon[75840]: 10.1a scrub starts
Nov 22 05:32:01 compute-0 ceph-mon[75840]: 10.1a scrub ok
Nov 22 05:32:01 compute-0 ceph-mon[75840]: pgmap v336: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 22 05:32:01 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 22 05:32:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:01 compute-0 sudo[126741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpadfahiobpxalzbdqbuijqfpfeahvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789521.449712-157-258818930841374/AnsiballZ_stat.py'
Nov 22 05:32:01 compute-0 sudo[126741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:02 compute-0 python3.9[126743]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:02 compute-0 sudo[126741]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:02 compute-0 sudo[126819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edyhvlzbzcgebknmclfutckqupqrxuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789521.449712-157-258818930841374/AnsiballZ_file.py'
Nov 22 05:32:02 compute-0 sudo[126819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:02 compute-0 ceph-mon[75840]: 10.14 deep-scrub starts
Nov 22 05:32:02 compute-0 ceph-mon[75840]: 10.14 deep-scrub ok
Nov 22 05:32:02 compute-0 python3.9[126821]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:02 compute-0 sudo[126819]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 22 05:32:02 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 22 05:32:03 compute-0 sudo[126971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxgjhfiphpdtrvuvukeofxchfyjwrvsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789522.7569752-169-109098897884116/AnsiballZ_systemd.py'
Nov 22 05:32:03 compute-0 sudo[126971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:03 compute-0 ceph-mon[75840]: 9.15 scrub starts
Nov 22 05:32:03 compute-0 ceph-mon[75840]: 9.15 scrub ok
Nov 22 05:32:03 compute-0 ceph-mon[75840]: pgmap v337: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:03 compute-0 python3.9[126973]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:32:03 compute-0 systemd[1]: Reloading.
Nov 22 05:32:03 compute-0 systemd-sysv-generator[127005]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:32:03 compute-0 systemd-rc-local-generator[126999]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:32:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:03 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 05:32:03 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 05:32:03 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 05:32:03 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 05:32:03 compute-0 sudo[126971]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:04 compute-0 ceph-mon[75840]: 9.1f scrub starts
Nov 22 05:32:04 compute-0 ceph-mon[75840]: 9.1f scrub ok
Nov 22 05:32:04 compute-0 python3.9[127166]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:32:04 compute-0 network[127183]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:32:04 compute-0 network[127184]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:32:04 compute-0 network[127185]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:32:05 compute-0 ceph-mon[75840]: pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:07 compute-0 ceph-mon[75840]: pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:09 compute-0 ceph-mon[75840]: pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:09 compute-0 sudo[127445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsgrfgtpvoxngzmpervptumwckyyncls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789529.2958229-195-243006275876515/AnsiballZ_stat.py'
Nov 22 05:32:09 compute-0 sudo[127445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:09 compute-0 python3.9[127447]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:09 compute-0 sudo[127445]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:10 compute-0 sudo[127523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwrqokzpnarupiohsishlfuyechsesm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789529.2958229-195-243006275876515/AnsiballZ_file.py'
Nov 22 05:32:10 compute-0 sudo[127523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:10 compute-0 python3.9[127525]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:10 compute-0 sudo[127523]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:11 compute-0 sudo[127675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykgvqboxsukttkullwszcpnpktxrbacz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789530.7974331-208-68237950519933/AnsiballZ_file.py'
Nov 22 05:32:11 compute-0 sudo[127675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:11 compute-0 python3.9[127677]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:11 compute-0 sudo[127675]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:11 compute-0 ceph-mon[75840]: pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:12 compute-0 sudo[127827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjyebnzifvlkguxsdkeshkfpruzcwdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789531.6278615-216-203796435967193/AnsiballZ_stat.py'
Nov 22 05:32:12 compute-0 sudo[127827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:12 compute-0 python3.9[127829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:12 compute-0 sudo[127827]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:12 compute-0 sudo[127905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchdqruyiocorgmzcixaxizepagvinrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789531.6278615-216-203796435967193/AnsiballZ_file.py'
Nov 22 05:32:12 compute-0 sudo[127905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:12 compute-0 python3.9[127907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:12 compute-0 sudo[127905]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:13 compute-0 ceph-mon[75840]: pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:13 compute-0 sudo[128057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rncicdfsmlikmfxsbgzinixkaddwnqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789533.2203867-231-179564494652462/AnsiballZ_timezone.py'
Nov 22 05:32:13 compute-0 sudo[128057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:13 compute-0 python3.9[128059]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 05:32:14 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 05:32:14 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 05:32:14 compute-0 sudo[128057]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:14 compute-0 sudo[128213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psvbdutgpktvoshpfaqozcihorwnfjsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789534.584979-240-121530525456920/AnsiballZ_file.py'
Nov 22 05:32:14 compute-0 sudo[128213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:15 compute-0 python3.9[128215]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:15 compute-0 sudo[128213]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:15 compute-0 ceph-mon[75840]: pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:15 compute-0 sudo[128365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkwvnlfchcboybvkmmmxqqpnikzbuutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789535.4774075-248-165274993392021/AnsiballZ_stat.py'
Nov 22 05:32:15 compute-0 sudo[128365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:16 compute-0 python3.9[128367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:16 compute-0 sudo[128365]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:16 compute-0 sudo[128443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxnogjtswhfrhzahpoetdxhvpmbnqgfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789535.4774075-248-165274993392021/AnsiballZ_file.py'
Nov 22 05:32:16 compute-0 sudo[128443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:16 compute-0 python3.9[128445]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:16 compute-0 sudo[128443]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:17 compute-0 sudo[128595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzkiiionixfygdcxzaxiapdagmaiwdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789536.9116268-260-201764669113941/AnsiballZ_stat.py'
Nov 22 05:32:17 compute-0 sudo[128595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:17 compute-0 ceph-mon[75840]: pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:17 compute-0 python3.9[128597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:17 compute-0 sudo[128595]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:17 compute-0 sudo[128673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knmtwgfwzmknvmaqbybbmcehfhndwifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789536.9116268-260-201764669113941/AnsiballZ_file.py'
Nov 22 05:32:17 compute-0 sudo[128673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:18 compute-0 python3.9[128675]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._5ykghkg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:18 compute-0 sudo[128673]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:18 compute-0 sudo[128826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opfiyzaashdiuoqoymeecpxfcyzyjdbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789538.2766912-272-259068974151717/AnsiballZ_stat.py'
Nov 22 05:32:18 compute-0 sudo[128826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:18 compute-0 python3.9[128828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:18 compute-0 sudo[128826]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:19 compute-0 sudo[128904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khuqpjkgifykdowjmdndfslnkyyngyzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789538.2766912-272-259068974151717/AnsiballZ_file.py'
Nov 22 05:32:19 compute-0 sudo[128904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:19 compute-0 python3.9[128906]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:19 compute-0 sudo[128904]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:19 compute-0 ceph-mon[75840]: pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:20 compute-0 sudo[129056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohgvxvzzaywgzisildzmmyxqfcptjgnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789539.6087298-285-67085696970187/AnsiballZ_command.py'
Nov 22 05:32:20 compute-0 sudo[129056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:20 compute-0 python3.9[129058]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:32:20 compute-0 sudo[129056]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:20 compute-0 sudo[129209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjuwimouryoxfogggsnfrkiinnvrhoyk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789540.4667988-293-143358978391549/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 05:32:20 compute-0 sudo[129209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:21 compute-0 python3[129211]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 05:32:21 compute-0 sudo[129209]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:21 compute-0 ceph-mon[75840]: pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:21 compute-0 sudo[129361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmkytuumvtemrczcvneyrrsgerczngoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789541.4039197-301-56167364570587/AnsiballZ_stat.py'
Nov 22 05:32:21 compute-0 sudo[129361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:21 compute-0 python3.9[129363]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:21 compute-0 sudo[129361]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:22 compute-0 sudo[129439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwbtbffwrqdsayfdxpjvztukenwtpaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789541.4039197-301-56167364570587/AnsiballZ_file.py'
Nov 22 05:32:22 compute-0 sudo[129439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:22 compute-0 python3.9[129441]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:22 compute-0 sudo[129439]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:23 compute-0 sudo[129591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evpdhcrqdvoohvvajwoiypbbyqaayvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789542.7582119-313-242501033414127/AnsiballZ_stat.py'
Nov 22 05:32:23 compute-0 sudo[129591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:23 compute-0 python3.9[129593]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:23 compute-0 sudo[129591]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:23 compute-0 ceph-mon[75840]: pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:23 compute-0 sudo[129669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgswqkcferdsmulhcunwrwcbxrymiofz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789542.7582119-313-242501033414127/AnsiballZ_file.py'
Nov 22 05:32:23 compute-0 sudo[129669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:23 compute-0 python3.9[129671]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:23 compute-0 sudo[129669]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:24 compute-0 sudo[129821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oslnbacxkndwftpppmeqneqmjxmocdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789544.1050284-325-141150857038737/AnsiballZ_stat.py'
Nov 22 05:32:24 compute-0 sudo[129821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:24 compute-0 python3.9[129823]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:24 compute-0 sudo[129821]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:25 compute-0 sudo[129899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwljhfswehznactyuwrhvclhocqiqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789544.1050284-325-141150857038737/AnsiballZ_file.py'
Nov 22 05:32:25 compute-0 sudo[129899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:25 compute-0 python3.9[129901]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:25 compute-0 sudo[129899]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:25 compute-0 ceph-mon[75840]: pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:26 compute-0 sudo[130051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vssmcvotuftycpcvfgiuzflwkoeyxfts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789545.5922308-337-136579333469024/AnsiballZ_stat.py'
Nov 22 05:32:26 compute-0 sudo[130051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:26 compute-0 python3.9[130053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:26 compute-0 sudo[130051]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:26 compute-0 sudo[130129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbdjsjfccnrdccajhajyfzvgbzckdlmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789545.5922308-337-136579333469024/AnsiballZ_file.py'
Nov 22 05:32:26 compute-0 sudo[130129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:26 compute-0 python3.9[130131]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:26 compute-0 sudo[130129]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:27 compute-0 ceph-mon[75840]: pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:27 compute-0 sudo[130281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unajaeddpyupssctzpzqdskmepnlsiuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789547.0038853-349-93488847530275/AnsiballZ_stat.py'
Nov 22 05:32:27 compute-0 sudo[130281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:27 compute-0 python3.9[130283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:27 compute-0 sudo[130281]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:28 compute-0 sudo[130359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foifznnlujhhzagjpmdemciqrlikqgtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789547.0038853-349-93488847530275/AnsiballZ_file.py'
Nov 22 05:32:28 compute-0 sudo[130359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:28 compute-0 python3.9[130361]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:28 compute-0 sudo[130359]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:28 compute-0 sudo[130511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrjhcltpfhwvtmwsgysduykgvewabryq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789548.5036638-362-63012379590426/AnsiballZ_command.py'
Nov 22 05:32:28 compute-0 sudo[130511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:28 compute-0 python3.9[130513]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:32:29 compute-0 sudo[130511]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:29 compute-0 ceph-mon[75840]: pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:29 compute-0 sudo[130666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdgynmhbythdxfrioaohtjeoklyzhrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789549.2596529-370-105442260238939/AnsiballZ_blockinfile.py'
Nov 22 05:32:29 compute-0 sudo[130666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:30 compute-0 python3.9[130668]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:30 compute-0 sudo[130666]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:30 compute-0 sudo[130818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjezhmyzrhruzudtcfxzfhynqxypdsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789550.2860708-379-172051667895914/AnsiballZ_file.py'
Nov 22 05:32:30 compute-0 sudo[130818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:30 compute-0 python3.9[130820]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:30 compute-0 sudo[130818]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:31 compute-0 sudo[130970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypeqnqvwhxxufmmtejlqhtuxnmapujli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789551.0911608-379-45927371767179/AnsiballZ_file.py'
Nov 22 05:32:31 compute-0 sudo[130970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:31 compute-0 ceph-mon[75840]: pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:31 compute-0 python3.9[130972]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:31 compute-0 sudo[130970]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:32 compute-0 sudo[131122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niuxaywphdghhqucwpppvmxawtauqner ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789551.9579213-394-243951135868666/AnsiballZ_mount.py'
Nov 22 05:32:32 compute-0 sudo[131122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:32 compute-0 python3.9[131124]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 05:32:32 compute-0 sudo[131122]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:33 compute-0 sudo[131276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefugduugzsjcwmoxnowilmwmiwyhagm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789553.0700433-394-137584392992085/AnsiballZ_mount.py'
Nov 22 05:32:33 compute-0 sudo[131276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:33 compute-0 ceph-mon[75840]: pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:33 compute-0 python3.9[131278]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 05:32:33 compute-0 sudo[131276]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:34 compute-0 sshd-session[123277]: Connection closed by 192.168.122.30 port 36416
Nov 22 05:32:34 compute-0 sshd-session[123274]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:32:34 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 22 05:32:34 compute-0 systemd[1]: session-39.scope: Consumed 36.372s CPU time.
Nov 22 05:32:34 compute-0 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Nov 22 05:32:34 compute-0 systemd-logind[798]: Removed session 39.
Nov 22 05:32:34 compute-0 ceph-mon[75840]: pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:36 compute-0 ceph-mon[75840]: pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:38 compute-0 sshd-session[131125]: Connection closed by authenticating user root 123.253.22.30 port 51760 [preauth]
Nov 22 05:32:38 compute-0 ceph-mon[75840]: pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:40 compute-0 sshd-session[131303]: Accepted publickey for zuul from 192.168.122.30 port 55764 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:32:40 compute-0 systemd-logind[798]: New session 40 of user zuul.
Nov 22 05:32:40 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 22 05:32:40 compute-0 sshd-session[131303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:32:40 compute-0 ceph-mon[75840]: pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:40 compute-0 sudo[131456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevhlfbctvbpgbfvrmfidsvqiduaaltj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789560.3525407-16-258059839876541/AnsiballZ_tempfile.py'
Nov 22 05:32:40 compute-0 sudo[131456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:41 compute-0 python3.9[131458]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 05:32:41 compute-0 sudo[131456]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:41 compute-0 sudo[131608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndquxbhjtbfzljykuqkddiffneemubi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789561.3614378-28-112757891676932/AnsiballZ_stat.py'
Nov 22 05:32:41 compute-0 sudo[131608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:42 compute-0 python3.9[131610]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:32:42 compute-0 sudo[131608]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:42 compute-0 ceph-mon[75840]: pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:42 compute-0 sudo[131762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmekuyclcxamwgstuyubwzmnnjxsvldl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789562.394753-36-185606186027644/AnsiballZ_slurp.py'
Nov 22 05:32:42 compute-0 sudo[131762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:43 compute-0 python3.9[131764]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 22 05:32:43 compute-0 sudo[131762]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:32:43
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.log']
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:32:43 compute-0 sudo[131914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsgtbzwrpxhdtrviepxabbexaimhsga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789563.3833303-44-221185150607419/AnsiballZ_stat.py'
Nov 22 05:32:43 compute-0 sudo[131914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:32:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:32:43 compute-0 python3.9[131916]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.ihtgg0ut follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:32:44 compute-0 sudo[131914]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:44 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 05:32:44 compute-0 sudo[132041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrvnhurtjslfziyluqosfyfgqxufzil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789563.3833303-44-221185150607419/AnsiballZ_copy.py'
Nov 22 05:32:44 compute-0 sudo[132041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:44 compute-0 python3.9[132043]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.ihtgg0ut mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789563.3833303-44-221185150607419/.source.ihtgg0ut _original_basename=.9ta2m4od follow=False checksum=5019c716591402e135ee98940de480aa18c2e44a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:44 compute-0 sudo[132041]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:44 compute-0 ceph-mon[75840]: pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:45 compute-0 sudo[132193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ychhyhoiduqbemfvxxqqcrtckijxgfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789565.0310373-59-8823300084045/AnsiballZ_setup.py'
Nov 22 05:32:45 compute-0 sudo[132193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:45 compute-0 python3.9[132195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:32:45 compute-0 sudo[132193]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:46 compute-0 sudo[132345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwvvjvgmeyrwwiqcqfdpjgapizfqomk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789566.2370098-68-38508770079541/AnsiballZ_blockinfile.py'
Nov 22 05:32:46 compute-0 sudo[132345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:46 compute-0 ceph-mon[75840]: pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:46 compute-0 python3.9[132347]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCit8LB4kN4s+ZkWj80X2HgMN9rqM53DLp82j+iZT/+7rzt4hXyml/QRwnRtRuhiMmFC20M8IvUEbNi1zKVVkcoHO/p5QkECCjKHEn1MqPis5D+QZQrGTeDLDkMrhuE8Pw5y61lJ5qm3EI6GZDRrUGmuVCEeJh9jpUQQ+8LlojrWycpo0svG9DIb8mUq1I1nCK8CeVIHkhCTc+F7OhSzzKJQHl5RrVX/K9kH0ak//kwjPdbyIHnB8JaTqci/DJPmcm4GxKKRNVErCrY3DBZNFCBt8iwjWu4MrqLv3iFLufwFed9mnoqLvVJGR8kDpmCdEKpNs8k6fls3xtt9j7NHMXOf4Xio2n+e3iS0eOEjoIKs/UMbDlHH7hqO/lx7Yv3YLgQtef4crGkOWxGILX2eOs5/1d6lgIzp04lzLy2oPlyJGb8bCwGvRMwojZNUO91mQkoO5vDssg6huJ8lBEWfxr8rao78xnahRc+m7sCEtI5n1VTqXAor62Z67+PFALoyi0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG3PVl+DJPXhnIIicPnX2nTw410SH80rkcpaBLgvWfvA
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBITMmQ16+iCw0/ZG0kuxaDVundusiLycQm50s7cZraLscE8RlmDWnFcRh+jIhL0lLGEyvuocxAlG/xRmMEF3zf8=
                                              create=True mode=0644 path=/tmp/ansible.ihtgg0ut state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:46 compute-0 sudo[132345]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:47 compute-0 sudo[132497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axmjdupyvytkshrvxibxllelayzonqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789567.1857417-76-260292163424620/AnsiballZ_command.py'
Nov 22 05:32:47 compute-0 sudo[132497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:47 compute-0 python3.9[132499]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.ihtgg0ut' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:32:47 compute-0 sudo[132497]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:48 compute-0 sudo[132651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsgbivjyvqowbkxyhovjvgsecyowjsdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789568.2040243-84-31099951398860/AnsiballZ_file.py'
Nov 22 05:32:48 compute-0 sudo[132651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:48 compute-0 python3.9[132653]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.ihtgg0ut state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:32:48 compute-0 sudo[132651]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:48 compute-0 ceph-mon[75840]: pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:49 compute-0 sshd-session[131306]: Connection closed by 192.168.122.30 port 55764
Nov 22 05:32:49 compute-0 sshd-session[131303]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:32:49 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Nov 22 05:32:49 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 22 05:32:49 compute-0 systemd[1]: session-40.scope: Consumed 6.275s CPU time.
Nov 22 05:32:49 compute-0 systemd-logind[798]: Removed session 40.
Nov 22 05:32:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:51 compute-0 ceph-mon[75840]: pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:32:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:32:53 compute-0 ceph-mon[75840]: pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:55 compute-0 ceph-mon[75840]: pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:55 compute-0 sshd-session[132678]: Accepted publickey for zuul from 192.168.122.30 port 56536 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:32:55 compute-0 systemd-logind[798]: New session 41 of user zuul.
Nov 22 05:32:55 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 22 05:32:55 compute-0 sshd-session[132678]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:32:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:56 compute-0 python3.9[132831]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:32:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:32:57 compute-0 ceph-mon[75840]: pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:57 compute-0 sudo[132985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbnyeucinmzytdzhvpvfdfokvfhhlors ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789576.8528101-32-127215385762040/AnsiballZ_systemd.py'
Nov 22 05:32:57 compute-0 sudo[132985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:57 compute-0 python3.9[132987]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 05:32:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:57 compute-0 sudo[132985]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:58 compute-0 sudo[133147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhkydxkbbywepxjarpgawhcxpzxmwirn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789578.1035397-40-233784851225451/AnsiballZ_systemd.py'
Nov 22 05:32:58 compute-0 sudo[133147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:58 compute-0 sudo[133130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:32:58 compute-0 sudo[133130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:58 compute-0 sudo[133130]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:58 compute-0 sudo[133167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:32:58 compute-0 sudo[133167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:58 compute-0 sudo[133167]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:58 compute-0 sudo[133192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:32:58 compute-0 sudo[133192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:58 compute-0 sudo[133192]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:58 compute-0 sudo[133217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:32:58 compute-0 sudo[133217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:58 compute-0 python3.9[133164]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:32:58 compute-0 sudo[133147]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 ceph-mon[75840]: pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:59 compute-0 sudo[133217]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:32:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 71a82913-a0ff-4a24-affe-e27619f4ebb7 does not exist
Nov 22 05:32:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 59a34d04-c1d4-48cd-9ac0-615508557de1 does not exist
Nov 22 05:32:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4275f6e5-cb03-483c-826d-2afdad8e54ca does not exist
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:32:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:32:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:32:59 compute-0 sudo[133349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:32:59 compute-0 sudo[133349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:59 compute-0 sudo[133349]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 sudo[133386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:32:59 compute-0 sudo[133386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:59 compute-0 sudo[133386]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 sudo[133428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:32:59 compute-0 sudo[133428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:59 compute-0 sudo[133428]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 sudo[133518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzjwjgaktmviuucwldoskoxnbueblpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789579.0738406-49-13851816868881/AnsiballZ_command.py'
Nov 22 05:32:59 compute-0 sudo[133518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:32:59 compute-0 sudo[133477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:32:59 compute-0 sudo[133477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:32:59 compute-0 python3.9[133522]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:32:59 compute-0 sudo[133518]: pam_unix(sudo:session): session closed for user root
Nov 22 05:32:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:32:59 compute-0 podman[133589]: 2025-11-22 05:32:59.965698569 +0000 UTC m=+0.060441579 container create 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:33:00 compute-0 systemd[1]: Started libpod-conmon-3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8.scope.
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:32:59.943174387 +0000 UTC m=+0.037917427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:33:00.063684711 +0000 UTC m=+0.158427781 container init 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:33:00.07510653 +0000 UTC m=+0.169849540 container start 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:33:00.079599368 +0000 UTC m=+0.174342418 container attach 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:33:00 compute-0 fervent_poitras[133629]: 167 167
Nov 22 05:33:00 compute-0 systemd[1]: libpod-3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8.scope: Deactivated successfully.
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:33:00.082232487 +0000 UTC m=+0.176975477 container died 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-867ebde3549ea8358fca5892f3398241f4e05a7e6b687ca46087c0bb10263c85-merged.mount: Deactivated successfully.
Nov 22 05:33:00 compute-0 podman[133589]: 2025-11-22 05:33:00.124038995 +0000 UTC m=+0.218782015 container remove 3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:33:00 compute-0 systemd[1]: libpod-conmon-3e2c2423d577d917c0f404a2533c94148b5a040c8cef2ba01836c12a959178f8.scope: Deactivated successfully.
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:33:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:33:00 compute-0 podman[133681]: 2025-11-22 05:33:00.343380492 +0000 UTC m=+0.072194126 container create 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:33:00 compute-0 systemd[1]: Started libpod-conmon-722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725.scope.
Nov 22 05:33:00 compute-0 podman[133681]: 2025-11-22 05:33:00.316381093 +0000 UTC m=+0.045194777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:00 compute-0 sshd-session[72106]: Received disconnect from 38.102.83.69 port 60252:11: disconnected by user
Nov 22 05:33:00 compute-0 sshd-session[72106]: Disconnected from user zuul 38.102.83.69 port 60252
Nov 22 05:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:00 compute-0 sshd-session[72103]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:33:00 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 22 05:33:00 compute-0 systemd[1]: session-17.scope: Consumed 1min 28.580s CPU time.
Nov 22 05:33:00 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Nov 22 05:33:00 compute-0 systemd-logind[798]: Removed session 17.
Nov 22 05:33:00 compute-0 podman[133681]: 2025-11-22 05:33:00.442011551 +0000 UTC m=+0.170825255 container init 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:33:00 compute-0 podman[133681]: 2025-11-22 05:33:00.455759352 +0000 UTC m=+0.184572996 container start 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:33:00 compute-0 podman[133681]: 2025-11-22 05:33:00.45987078 +0000 UTC m=+0.188684464 container attach 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:33:00 compute-0 sudo[133776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxmcluefcwfbgpitgqaoovrrrawzhbac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789579.9997826-57-262227483600434/AnsiballZ_stat.py'
Nov 22 05:33:00 compute-0 sudo[133776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:00 compute-0 python3.9[133778]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:33:00 compute-0 sudo[133776]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 ceph-mon[75840]: pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:01 compute-0 sudo[133948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ombvvgexxlndkqbtcpahlgsqaxhkpmdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789580.9790432-66-106438231697060/AnsiballZ_file.py'
Nov 22 05:33:01 compute-0 sudo[133948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:01 compute-0 suspicious_mclean[133734]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:33:01 compute-0 suspicious_mclean[133734]: --> relative data size: 1.0
Nov 22 05:33:01 compute-0 suspicious_mclean[133734]: --> All data devices are unavailable
Nov 22 05:33:01 compute-0 systemd[1]: libpod-722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725.scope: Deactivated successfully.
Nov 22 05:33:01 compute-0 podman[133681]: 2025-11-22 05:33:01.612338472 +0000 UTC m=+1.341152106 container died 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:33:01 compute-0 systemd[1]: libpod-722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725.scope: Consumed 1.095s CPU time.
Nov 22 05:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9ca08d87b6f144f43c9136c2fdfe3974c9a79d7dadf9c4943808632cc69589f-merged.mount: Deactivated successfully.
Nov 22 05:33:01 compute-0 podman[133681]: 2025-11-22 05:33:01.676388882 +0000 UTC m=+1.405202486 container remove 722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:33:01 compute-0 systemd[1]: libpod-conmon-722b5e5228e9b04d2b7cdb3c9289397917ef0186d7619b5c8c61d11902bf0725.scope: Deactivated successfully.
Nov 22 05:33:01 compute-0 sudo[133477]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 sudo[133970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:33:01 compute-0 sudo[133970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:01 compute-0 sudo[133970]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 python3.9[133952]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:01 compute-0 sudo[133948]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 sudo[133995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:33:01 compute-0 sudo[133995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:01 compute-0 sudo[133995]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:01 compute-0 sudo[134037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:33:01 compute-0 sudo[134037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:01 compute-0 sudo[134037]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:01 compute-0 sudo[134069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:33:01 compute-0 sudo[134069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:02 compute-0 rsyslogd[1005]: imjournal: 1597 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 05:33:02 compute-0 sshd-session[132681]: Connection closed by 192.168.122.30 port 56536
Nov 22 05:33:02 compute-0 sshd-session[132678]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:33:02 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 22 05:33:02 compute-0 systemd[1]: session-41.scope: Consumed 4.532s CPU time.
Nov 22 05:33:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:02 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Nov 22 05:33:02 compute-0 systemd-logind[798]: Removed session 41.
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.297558868 +0000 UTC m=+0.052653513 container create 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:33:02 compute-0 systemd[1]: Started libpod-conmon-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope.
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.27096844 +0000 UTC m=+0.026063135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.396464175 +0000 UTC m=+0.151558870 container init 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.409090656 +0000 UTC m=+0.164185311 container start 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.413584224 +0000 UTC m=+0.168678929 container attach 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:33:02 compute-0 dazzling_roentgen[134153]: 167 167
Nov 22 05:33:02 compute-0 systemd[1]: libpod-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope: Deactivated successfully.
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.4176302 +0000 UTC m=+0.172724875 container died 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f7d6ac9c4f398636a5b7416a6431e91881f43560031ae51a01f45a764d4d734-merged.mount: Deactivated successfully.
Nov 22 05:33:02 compute-0 podman[134136]: 2025-11-22 05:33:02.465335982 +0000 UTC m=+0.220430627 container remove 5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:33:02 compute-0 systemd[1]: libpod-conmon-5fd248e7f333163507eef72406739a18e7be86fe1e2a550a8fc961ca7e9762c1.scope: Deactivated successfully.
Nov 22 05:33:02 compute-0 podman[134177]: 2025-11-22 05:33:02.706086682 +0000 UTC m=+0.072716500 container create fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:33:02 compute-0 systemd[1]: Started libpod-conmon-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope.
Nov 22 05:33:02 compute-0 podman[134177]: 2025-11-22 05:33:02.67746738 +0000 UTC m=+0.044097208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:02 compute-0 podman[134177]: 2025-11-22 05:33:02.820407192 +0000 UTC m=+0.187037050 container init fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:33:02 compute-0 podman[134177]: 2025-11-22 05:33:02.831563715 +0000 UTC m=+0.198193523 container start fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:33:02 compute-0 podman[134177]: 2025-11-22 05:33:02.835460838 +0000 UTC m=+0.202090656 container attach fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:33:03 compute-0 ceph-mon[75840]: pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:03 compute-0 clever_kepler[134193]: {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     "0": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "devices": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "/dev/loop3"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             ],
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_name": "ceph_lv0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_size": "21470642176",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "name": "ceph_lv0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "tags": {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_name": "ceph",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.crush_device_class": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.encrypted": "0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_id": "0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.vdo": "0"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             },
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "vg_name": "ceph_vg0"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         }
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     ],
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     "1": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "devices": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "/dev/loop4"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             ],
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_name": "ceph_lv1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_size": "21470642176",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "name": "ceph_lv1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "tags": {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_name": "ceph",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.crush_device_class": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.encrypted": "0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_id": "1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.vdo": "0"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             },
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "vg_name": "ceph_vg1"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         }
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     ],
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     "2": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "devices": [
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "/dev/loop5"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             ],
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_name": "ceph_lv2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_size": "21470642176",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "name": "ceph_lv2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "tags": {
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.cluster_name": "ceph",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.crush_device_class": "",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.encrypted": "0",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osd_id": "2",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:                 "ceph.vdo": "0"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             },
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "type": "block",
Nov 22 05:33:03 compute-0 clever_kepler[134193]:             "vg_name": "ceph_vg2"
Nov 22 05:33:03 compute-0 clever_kepler[134193]:         }
Nov 22 05:33:03 compute-0 clever_kepler[134193]:     ]
Nov 22 05:33:03 compute-0 clever_kepler[134193]: }
Nov 22 05:33:03 compute-0 systemd[1]: libpod-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope: Deactivated successfully.
Nov 22 05:33:03 compute-0 podman[134177]: 2025-11-22 05:33:03.60588034 +0000 UTC m=+0.972510208 container died fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-210f6c2a33c704f7609e12f094adbdc81aa14dcbd3424e3aabe067eaee8067bd-merged.mount: Deactivated successfully.
Nov 22 05:33:03 compute-0 podman[134177]: 2025-11-22 05:33:03.667520899 +0000 UTC m=+1.034150707 container remove fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kepler, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:33:03 compute-0 systemd[1]: libpod-conmon-fa35a08800af62b7a791907c5bcc4e6da781826be3833800f141f2f1e3aa7257.scope: Deactivated successfully.
Nov 22 05:33:03 compute-0 sudo[134069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:03 compute-0 sudo[134213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:33:03 compute-0 sudo[134213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:03 compute-0 sudo[134213]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:03 compute-0 sudo[134238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:33:03 compute-0 sudo[134238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:03 compute-0 sudo[134238]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:03 compute-0 sudo[134263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:33:03 compute-0 sudo[134263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:03 compute-0 sudo[134263]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:04 compute-0 sudo[134288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:33:04 compute-0 sudo[134288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.433115455 +0000 UTC m=+0.044493799 container create 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:33:04 compute-0 systemd[1]: Started libpod-conmon-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope.
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.412603517 +0000 UTC m=+0.023981871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.531997291 +0000 UTC m=+0.143375695 container init 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.544431568 +0000 UTC m=+0.155809922 container start 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.548431703 +0000 UTC m=+0.159810097 container attach 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:33:04 compute-0 intelligent_babbage[134370]: 167 167
Nov 22 05:33:04 compute-0 systemd[1]: libpod-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope: Deactivated successfully.
Nov 22 05:33:04 compute-0 conmon[134370]: conmon 3bb523b6952f95c551aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope/container/memory.events
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.554060301 +0000 UTC m=+0.165438665 container died 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e1a96e6070c3e32d9cd7d595c017cb424086faa8230c57fac777d02859c3e6e-merged.mount: Deactivated successfully.
Nov 22 05:33:04 compute-0 podman[134354]: 2025-11-22 05:33:04.602645415 +0000 UTC m=+0.214023729 container remove 3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_babbage, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:33:04 compute-0 systemd[1]: libpod-conmon-3bb523b6952f95c551aac1a09cfbf34ac2701562ecd304944425f9814b080a5a.scope: Deactivated successfully.
Nov 22 05:33:04 compute-0 podman[134393]: 2025-11-22 05:33:04.791288008 +0000 UTC m=+0.054608855 container create 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:33:04 compute-0 systemd[1]: Started libpod-conmon-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope.
Nov 22 05:33:04 compute-0 podman[134393]: 2025-11-22 05:33:04.764066003 +0000 UTC m=+0.027386850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:33:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:33:04 compute-0 podman[134393]: 2025-11-22 05:33:04.885931572 +0000 UTC m=+0.149252469 container init 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:33:04 compute-0 podman[134393]: 2025-11-22 05:33:04.898983735 +0000 UTC m=+0.162304582 container start 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:33:04 compute-0 podman[134393]: 2025-11-22 05:33:04.903923795 +0000 UTC m=+0.167244622 container attach 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:33:05 compute-0 ceph-mon[75840]: pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]: {
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_id": 1,
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "type": "bluestore"
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     },
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_id": 2,
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "type": "bluestore"
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     },
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_id": 0,
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:         "type": "bluestore"
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]:     }
Nov 22 05:33:06 compute-0 vigorous_khayyam[134409]: }
Nov 22 05:33:06 compute-0 systemd[1]: libpod-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Deactivated successfully.
Nov 22 05:33:06 compute-0 systemd[1]: libpod-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Consumed 1.166s CPU time.
Nov 22 05:33:06 compute-0 podman[134393]: 2025-11-22 05:33:06.056129969 +0000 UTC m=+1.319450806 container died 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-97a010d457f4082ec8304dc3cd747b65763d2a79a519868cfb4bbcd9fcbc9808-merged.mount: Deactivated successfully.
Nov 22 05:33:06 compute-0 podman[134393]: 2025-11-22 05:33:06.135526773 +0000 UTC m=+1.398847570 container remove 64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:33:06 compute-0 systemd[1]: libpod-conmon-64e6d512ef3b46a56df0ded4ada0afbd605e576e43fcb598b4c365598de3d2d8.scope: Deactivated successfully.
Nov 22 05:33:06 compute-0 sudo[134288]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:33:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:33:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:33:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:33:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d964052e-96fe-43bb-92af-3bdb7e57fc71 does not exist
Nov 22 05:33:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 170145ac-a64c-4b60-9f8d-5502bd159b97 does not exist
Nov 22 05:33:06 compute-0 sudo[134453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:33:06 compute-0 sudo[134453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:06 compute-0 sudo[134453]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:06 compute-0 sudo[134478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:33:06 compute-0 sudo[134478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:33:06 compute-0 sudo[134478]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:07 compute-0 ceph-mon[75840]: pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:33:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:33:07 compute-0 sshd-session[134503]: Accepted publickey for zuul from 192.168.122.30 port 57588 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:33:07 compute-0 systemd-logind[798]: New session 42 of user zuul.
Nov 22 05:33:07 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 22 05:33:07 compute-0 sshd-session[134503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:33:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:08 compute-0 python3.9[134656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:33:09 compute-0 ceph-mon[75840]: pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:09 compute-0 sudo[134810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maoddamkqjnoprnrjlbzyyoyrpabuwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789588.9468026-34-189853086859772/AnsiballZ_setup.py'
Nov 22 05:33:09 compute-0 sudo[134810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:09 compute-0 python3.9[134812]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:33:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:09 compute-0 sudo[134810]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:10 compute-0 sudo[134894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpantmwckllvqudayhtlgsqsfhxxvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789588.9468026-34-189853086859772/AnsiballZ_dnf.py'
Nov 22 05:33:10 compute-0 sudo[134894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:10 compute-0 python3.9[134896]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 05:33:11 compute-0 ceph-mon[75840]: pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:11 compute-0 sudo[134894]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:12 compute-0 python3.9[135047]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:33:13 compute-0 ceph-mon[75840]: pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:14 compute-0 python3.9[135198]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:33:15 compute-0 python3.9[135348]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:33:15 compute-0 ceph-mon[75840]: pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:15 compute-0 python3.9[135498]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:33:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:16 compute-0 sshd-session[134506]: Connection closed by 192.168.122.30 port 57588
Nov 22 05:33:16 compute-0 sshd-session[134503]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:33:16 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 22 05:33:16 compute-0 systemd[1]: session-42.scope: Consumed 6.540s CPU time.
Nov 22 05:33:16 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Nov 22 05:33:16 compute-0 systemd-logind[798]: Removed session 42.
Nov 22 05:33:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:17 compute-0 ceph-mon[75840]: pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:19 compute-0 ceph-mon[75840]: pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:21 compute-0 sshd-session[135523]: Accepted publickey for zuul from 192.168.122.30 port 53966 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:33:21 compute-0 systemd-logind[798]: New session 43 of user zuul.
Nov 22 05:33:21 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 22 05:33:21 compute-0 sshd-session[135523]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:33:21 compute-0 ceph-mon[75840]: pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:22 compute-0 python3.9[135676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:33:23 compute-0 ceph-mon[75840]: pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:24 compute-0 sudo[135830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tztatjkdygtwmylhhlcrckfgmoaomhur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789603.5806818-50-168709337980941/AnsiballZ_file.py'
Nov 22 05:33:24 compute-0 sudo[135830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:24 compute-0 python3.9[135832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:24 compute-0 sudo[135830]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:24 compute-0 sudo[135982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hasvblompqnyszxqeedvlcgqyryqlnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789604.4574282-50-115429660680423/AnsiballZ_file.py'
Nov 22 05:33:24 compute-0 sudo[135982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:25 compute-0 python3.9[135984]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:25 compute-0 sudo[135982]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:25 compute-0 ceph-mon[75840]: pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:25 compute-0 sudo[136134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwmgrlrozocxpnrtbmtnwcaqegmqsaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789605.2980092-65-175876197547155/AnsiballZ_stat.py'
Nov 22 05:33:25 compute-0 sudo[136134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:26 compute-0 python3.9[136136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:26 compute-0 sudo[136134]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:26 compute-0 sudo[136257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smvbtyiscaillhfozbvwjtdcnjfesvrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789605.2980092-65-175876197547155/AnsiballZ_copy.py'
Nov 22 05:33:26 compute-0 sudo[136257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:26 compute-0 python3.9[136259]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789605.2980092-65-175876197547155/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=77951e7b50336235552278fe09afe27239a664e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:26 compute-0 sudo[136257]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:27 compute-0 ceph-mon[75840]: pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:27 compute-0 sudo[136409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqllhqppzqwjqxafwvhljputktumliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789607.0948963-65-96015583973532/AnsiballZ_stat.py'
Nov 22 05:33:27 compute-0 sudo[136409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:27 compute-0 python3.9[136411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:27 compute-0 sudo[136409]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:28 compute-0 sudo[136532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uurhczxbsahsmxptlhkaueawkoyxybwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789607.0948963-65-96015583973532/AnsiballZ_copy.py'
Nov 22 05:33:28 compute-0 sudo[136532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:28 compute-0 python3.9[136534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789607.0948963-65-96015583973532/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=cc36de41baf31cbc96876a4c978b53cf42c09af0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:28 compute-0 sudo[136532]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:28 compute-0 sudo[136684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scwhfadqykqpwhorakalhewmkyqplzop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789608.550052-65-185223896507610/AnsiballZ_stat.py'
Nov 22 05:33:28 compute-0 sudo[136684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:29 compute-0 python3.9[136686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:29 compute-0 sudo[136684]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:29 compute-0 ceph-mon[75840]: pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:29 compute-0 sudo[136807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtipupmiavukgurjkwmssvfoafpyasdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789608.550052-65-185223896507610/AnsiballZ_copy.py'
Nov 22 05:33:29 compute-0 sudo[136807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:29 compute-0 python3.9[136809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789608.550052-65-185223896507610/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=140c9cb76d53049a1cbc8d30c6f9b46f7bce8deb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:29 compute-0 sudo[136807]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:30 compute-0 sudo[136959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycaigohtxsvmykcfhbbawenmuccxasim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789609.975919-109-50036607665226/AnsiballZ_file.py'
Nov 22 05:33:30 compute-0 sudo[136959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:30 compute-0 python3.9[136961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:30 compute-0 sudo[136959]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:31 compute-0 sudo[137111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyfylsqkigsctaevfzjpoymynroigeuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789610.7412126-109-201734692747851/AnsiballZ_file.py'
Nov 22 05:33:31 compute-0 sudo[137111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:31 compute-0 ceph-mon[75840]: pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:31 compute-0 python3.9[137113]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:31 compute-0 sudo[137111]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:31 compute-0 sudo[137263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmiysgxkushmmzashplrkufcsqrlxasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789611.5777376-124-6411806704758/AnsiballZ_stat.py'
Nov 22 05:33:31 compute-0 sudo[137263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:32 compute-0 python3.9[137265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:32 compute-0 sudo[137263]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:32 compute-0 sudo[137386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyoeksvibawdptvwcoaofemhdctsexjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789611.5777376-124-6411806704758/AnsiballZ_copy.py'
Nov 22 05:33:32 compute-0 sudo[137386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:32 compute-0 python3.9[137388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789611.5777376-124-6411806704758/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8ebf332b73c2404b9ed7c58d624aeca8bfa0f3b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:32 compute-0 sudo[137386]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:33 compute-0 sudo[137538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueensmcmbywhytlzwbnqwrkzqfbrehfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789612.972755-124-262982423294656/AnsiballZ_stat.py'
Nov 22 05:33:33 compute-0 sudo[137538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:33 compute-0 ceph-mon[75840]: pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:33 compute-0 python3.9[137540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:33 compute-0 sudo[137538]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:33 compute-0 sudo[137661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htuszqdszqkrqhmwzdwqjgftqtvcocnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789612.972755-124-262982423294656/AnsiballZ_copy.py'
Nov 22 05:33:33 compute-0 sudo[137661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:34 compute-0 python3.9[137663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789612.972755-124-262982423294656/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fa92895d4da00e1b95bf94f57650d8e2b328cd80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:34 compute-0 sudo[137661]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:34 compute-0 sudo[137813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcmjxfgksaebkfpigenapsedtertsudn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789614.3956342-124-54353405290954/AnsiballZ_stat.py'
Nov 22 05:33:34 compute-0 sudo[137813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:35 compute-0 python3.9[137815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:35 compute-0 sudo[137813]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:35 compute-0 ceph-mon[75840]: pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:35 compute-0 sudo[137936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbyaurwsuacpjszqyjnqaxxkamemnvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789614.3956342-124-54353405290954/AnsiballZ_copy.py'
Nov 22 05:33:35 compute-0 sudo[137936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:35 compute-0 python3.9[137938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789614.3956342-124-54353405290954/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d1d78ab8a7a90917655f98d9b0e79cb5a7a3c349 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:35 compute-0 sudo[137936]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:36 compute-0 sudo[138088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkmhnvqrrcaohfscsgfpiukiwqrfkaiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789615.9658723-168-175403073340862/AnsiballZ_file.py'
Nov 22 05:33:36 compute-0 sudo[138088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:36 compute-0 python3.9[138090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:36 compute-0 sudo[138088]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.136255) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617136303, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1793, "num_deletes": 252, "total_data_size": 2587734, "memory_usage": 2635864, "flush_reason": "Manual Compaction"}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 22 05:33:37 compute-0 sudo[138240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgsxfirbqlnuexpvetcyrktaymtgctpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789616.7814076-168-94896695611809/AnsiballZ_file.py'
Nov 22 05:33:37 compute-0 sudo[138240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617150984, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1515194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7332, "largest_seqno": 9124, "table_properties": {"data_size": 1509313, "index_size": 2700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17300, "raw_average_key_size": 20, "raw_value_size": 1495308, "raw_average_value_size": 1810, "num_data_blocks": 127, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789449, "oldest_key_time": 1763789449, "file_creation_time": 1763789617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14808 microseconds, and 7886 cpu microseconds.
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.151060) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1515194 bytes OK
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.151088) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.152951) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.152977) EVENT_LOG_v1 {"time_micros": 1763789617152967, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.153003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2579779, prev total WAL file size 2579779, number of live WAL files 2.
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.154522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1479KB)], [20(6934KB)]
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617154592, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8616614, "oldest_snapshot_seqno": -1}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3426 keys, 6901522 bytes, temperature: kUnknown
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617201643, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6901522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6875162, "index_size": 16714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81861, "raw_average_key_size": 23, "raw_value_size": 6809806, "raw_average_value_size": 1987, "num_data_blocks": 741, "num_entries": 3426, "num_filter_entries": 3426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.201982) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6901522 bytes
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.203590) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 146.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.8 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(10.2) write-amplify(4.6) OK, records in: 3864, records dropped: 438 output_compression: NoCompression
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.203623) EVENT_LOG_v1 {"time_micros": 1763789617203607, "job": 6, "event": "compaction_finished", "compaction_time_micros": 47152, "compaction_time_cpu_micros": 31796, "output_level": 6, "num_output_files": 1, "total_output_size": 6901522, "num_input_records": 3864, "num_output_records": 3426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617204213, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789617206396, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.154371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:33:37.206541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:33:37 compute-0 ceph-mon[75840]: pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:37 compute-0 python3.9[138242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:37 compute-0 sudo[138240]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:38 compute-0 sudo[138392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxyhvpisxlogumzruouusiqonxbkxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789617.6379368-183-102035034263227/AnsiballZ_stat.py'
Nov 22 05:33:38 compute-0 sudo[138392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:38 compute-0 python3.9[138394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:38 compute-0 sudo[138392]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:38 compute-0 sudo[138515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejrnagauujsktthypvfdwjqjiededali ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789617.6379368-183-102035034263227/AnsiballZ_copy.py'
Nov 22 05:33:38 compute-0 sudo[138515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:38 compute-0 python3.9[138517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789617.6379368-183-102035034263227/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4ff65cfce7b98fb4d1bc2d32d002eedf36c3c536 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:38 compute-0 sudo[138515]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:39 compute-0 ceph-mon[75840]: pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:39 compute-0 sudo[138667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fttnrddkqxvfznbytnekhmpffhojviva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789619.05904-183-253288449109497/AnsiballZ_stat.py'
Nov 22 05:33:39 compute-0 sudo[138667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:39 compute-0 python3.9[138669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:39 compute-0 sudo[138667]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:40 compute-0 sudo[138790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epalfnznqbswwbyjetarahzkocvmjmgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789619.05904-183-253288449109497/AnsiballZ_copy.py'
Nov 22 05:33:40 compute-0 sudo[138790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:40 compute-0 python3.9[138792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789619.05904-183-253288449109497/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=fa92895d4da00e1b95bf94f57650d8e2b328cd80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:40 compute-0 sudo[138790]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:40 compute-0 sudo[138942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sinyfxqgdpohvrxvbfesonqawbvohzzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789620.5042253-183-105727498825125/AnsiballZ_stat.py'
Nov 22 05:33:40 compute-0 sudo[138942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:41 compute-0 python3.9[138944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:41 compute-0 sudo[138942]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:41 compute-0 ceph-mon[75840]: pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:41 compute-0 sudo[139065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dssdwjqbgzrekmhokfunujmcrupttotc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789620.5042253-183-105727498825125/AnsiballZ_copy.py'
Nov 22 05:33:41 compute-0 sudo[139065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:41 compute-0 python3.9[139067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789620.5042253-183-105727498825125/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6d3f64c209326b8bc026e1d9bfc49bdb76d81f14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:41 compute-0 sudo[139065]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:43 compute-0 sudo[139217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemgnrfnlbxpkbycpoomnutbpckwxnyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789622.6219375-243-157863062792199/AnsiballZ_file.py'
Nov 22 05:33:43 compute-0 sudo[139217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:43 compute-0 python3.9[139219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:43 compute-0 sudo[139217]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:43 compute-0 ceph-mon[75840]: pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:33:43
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:33:43 compute-0 sudo[139369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpdxtrasfsjohlkvemjujjcdqviwkvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789623.4509308-251-1747911200371/AnsiballZ_stat.py'
Nov 22 05:33:43 compute-0 sudo[139369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:33:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:44 compute-0 python3.9[139371]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:44 compute-0 sudo[139369]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:44 compute-0 sudo[139492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzyruqfmnnhaxyiqytgjasadtygnkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789623.4509308-251-1747911200371/AnsiballZ_copy.py'
Nov 22 05:33:44 compute-0 sudo[139492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:44 compute-0 python3.9[139494]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789623.4509308-251-1747911200371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:44 compute-0 sudo[139492]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:45 compute-0 sudo[139644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euyijjzuqhmnanuictsbdqfxeornvury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789624.9839442-267-28343200577141/AnsiballZ_file.py'
Nov 22 05:33:45 compute-0 sudo[139644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:45 compute-0 ceph-mon[75840]: pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:45 compute-0 python3.9[139646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:45 compute-0 sudo[139644]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:46 compute-0 sudo[139796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnnkzyiyagexemhgvtwraixocykzegu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789625.8758318-275-220002299610371/AnsiballZ_stat.py'
Nov 22 05:33:46 compute-0 sudo[139796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:46 compute-0 python3.9[139798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:46 compute-0 sudo[139796]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:46 compute-0 sudo[139919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkvhklnwxvpeatlyvqfehzrgpsuslsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789625.8758318-275-220002299610371/AnsiballZ_copy.py'
Nov 22 05:33:46 compute-0 sudo[139919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:47 compute-0 python3.9[139921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789625.8758318-275-220002299610371/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:47 compute-0 sudo[139919]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:47 compute-0 ceph-mon[75840]: pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:47 compute-0 sudo[140071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gszzhuswtopswexhkohixovacstfirze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789627.4517379-291-216730140151226/AnsiballZ_file.py'
Nov 22 05:33:47 compute-0 sudo[140071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:48 compute-0 python3.9[140073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:48 compute-0 sudo[140071]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:48 compute-0 sudo[140223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fozzlxgbtwxyjimazgarpsafpdrtmbkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789628.3483827-299-45977039908410/AnsiballZ_stat.py'
Nov 22 05:33:48 compute-0 sudo[140223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:48 compute-0 python3.9[140225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:48 compute-0 sudo[140223]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:49 compute-0 sudo[140346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusbbbvbkzhlglfwhxivwppffueblphg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789628.3483827-299-45977039908410/AnsiballZ_copy.py'
Nov 22 05:33:49 compute-0 sudo[140346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:49 compute-0 ceph-mon[75840]: pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:49 compute-0 python3.9[140348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789628.3483827-299-45977039908410/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:49 compute-0 sudo[140346]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:50 compute-0 sudo[140498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xehmepoboauwmfripjeqzqbqahjycngn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789629.9723675-315-67522521467437/AnsiballZ_file.py'
Nov 22 05:33:50 compute-0 sudo[140498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:50 compute-0 python3.9[140500]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:50 compute-0 sudo[140498]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:51 compute-0 sudo[140650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqskijahuslhfqaqnnmjckzyfukinsxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789630.7861557-323-164303567492929/AnsiballZ_stat.py'
Nov 22 05:33:51 compute-0 sudo[140650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:51 compute-0 python3.9[140652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:51 compute-0 sudo[140650]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:51 compute-0 ceph-mon[75840]: pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:51 compute-0 sudo[140773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhxzjkuknxmmzruqsapvavvqpndvvdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789630.7861557-323-164303567492929/AnsiballZ_copy.py'
Nov 22 05:33:51 compute-0 sudo[140773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:52 compute-0 python3.9[140775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789630.7861557-323-164303567492929/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:52 compute-0 sudo[140773]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:33:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:33:52 compute-0 sudo[140925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobhzrcrjstyslnggmvdaejahhevyhkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789632.3836958-339-244032475943116/AnsiballZ_file.py'
Nov 22 05:33:52 compute-0 sudo[140925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:52 compute-0 python3.9[140927]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:52 compute-0 sudo[140925]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:53 compute-0 ceph-mon[75840]: pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:53 compute-0 sudo[141077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbvypmebjzwtmbrqqxtadaarfefjmfww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789633.1954396-347-126884892009720/AnsiballZ_stat.py'
Nov 22 05:33:53 compute-0 sudo[141077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:53 compute-0 python3.9[141079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:53 compute-0 sudo[141077]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:54 compute-0 sudo[141200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkhylqxnqdhgzvmipejyfpmblpokhuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789633.1954396-347-126884892009720/AnsiballZ_copy.py'
Nov 22 05:33:54 compute-0 sudo[141200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:54 compute-0 python3.9[141202]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789633.1954396-347-126884892009720/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:54 compute-0 sudo[141200]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:55 compute-0 sudo[141352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytmplhbqgextkrpwkcvzfeihktvcwcrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789634.730579-363-10692902975552/AnsiballZ_file.py'
Nov 22 05:33:55 compute-0 sudo[141352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:55 compute-0 python3.9[141354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:33:55 compute-0 sudo[141352]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:55 compute-0 ceph-mon[75840]: pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:55 compute-0 sudo[141504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vndjprdrwwpqpxzxbnidgufzebfiytbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789635.53907-371-65632485830244/AnsiballZ_stat.py'
Nov 22 05:33:55 compute-0 sudo[141504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:56 compute-0 python3.9[141506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:33:56 compute-0 sudo[141504]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:56 compute-0 sudo[141627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxemvcfgprdokctjszdaicwjyhacabwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789635.53907-371-65632485830244/AnsiballZ_copy.py'
Nov 22 05:33:56 compute-0 sudo[141627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:33:56 compute-0 python3.9[141629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789635.53907-371-65632485830244/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e37b1fec5954b14a4e6484746957336ccb49759f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:33:56 compute-0 sudo[141627]: pam_unix(sudo:session): session closed for user root
Nov 22 05:33:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:33:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2062 writes, 9128 keys, 2062 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2062 writes, 2062 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2062 writes, 9128 keys, 2062 commit groups, 1.0 writes per commit group, ingest: 11.08 MB, 0.02 MB/s
                                           Interval WAL: 2062 writes, 2062 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    116.8      0.07              0.03         3    0.024       0      0       0.0       0.0
                                             L6      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    174.7    155.2      0.09              0.05         2    0.043    7192    728       0.0       0.0
                                            Sum      1/0    6.58 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     95.9    137.9      0.16              0.08         5    0.031    7192    728       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    102.2    146.6      0.15              0.08         4    0.037    7192    728       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    174.7    155.2      0.09              0.05         2    0.043    7192    728       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    134.5      0.06              0.03         2    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 308.00 MB usage: 541.33 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,450.11 KB,0.142714%) FilterBlock(6,27.80 KB,0.00881344%) IndexBlock(6,63.42 KB,0.0201089%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 05:33:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:33:57 compute-0 sshd-session[135526]: Connection closed by 192.168.122.30 port 53966
Nov 22 05:33:57 compute-0 sshd-session[135523]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:33:57 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 22 05:33:57 compute-0 systemd[1]: session-43.scope: Consumed 28.379s CPU time.
Nov 22 05:33:57 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Nov 22 05:33:57 compute-0 systemd-logind[798]: Removed session 43.
Nov 22 05:33:57 compute-0 ceph-mon[75840]: pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:59 compute-0 ceph-mon[75840]: pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:33:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:01 compute-0 ceph-mon[75840]: pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:03 compute-0 sshd-session[141654]: Accepted publickey for zuul from 192.168.122.30 port 53830 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:34:03 compute-0 systemd-logind[798]: New session 44 of user zuul.
Nov 22 05:34:03 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 22 05:34:03 compute-0 sshd-session[141654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:34:03 compute-0 ceph-mon[75840]: pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:03 compute-0 sudo[141807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czrizvryuqobnwcccisnkogeqzldwtui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789643.3533525-22-259853057759615/AnsiballZ_file.py'
Nov 22 05:34:03 compute-0 sudo[141807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:04 compute-0 python3.9[141809]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:04 compute-0 sudo[141807]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:04 compute-0 sudo[141959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emadhsdqrkvjxckfscxjezvffqaamitg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789644.3873174-34-261050912144302/AnsiballZ_stat.py'
Nov 22 05:34:04 compute-0 sudo[141959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:05 compute-0 python3.9[141961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:05 compute-0 sudo[141959]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:05 compute-0 ceph-mon[75840]: pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:05 compute-0 sudo[142082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fovideevmpkhrtomzsdgbacpoccerezk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789644.3873174-34-261050912144302/AnsiballZ_copy.py'
Nov 22 05:34:05 compute-0 sudo[142082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:05 compute-0 python3.9[142084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789644.3873174-34-261050912144302/.source.conf _original_basename=ceph.conf follow=False checksum=1263ab632842a96ecd941a91f52f1b587861adae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:05 compute-0 sudo[142082]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:06 compute-0 sudo[142192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:06 compute-0 sudo[142192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:06 compute-0 sudo[142192]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:06 compute-0 sudo[142279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrgpilqbkcrorxppfrtofdrnjtgmvkka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789646.1903982-34-185735434447221/AnsiballZ_stat.py'
Nov 22 05:34:06 compute-0 sudo[142279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:06 compute-0 sudo[142241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:34:06 compute-0 sudo[142241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:06 compute-0 sudo[142241]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:06 compute-0 sudo[142287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:06 compute-0 sudo[142287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:06 compute-0 sudo[142287]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:06 compute-0 sudo[142312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:34:06 compute-0 sudo[142312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:06 compute-0 python3.9[142285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:06 compute-0 sudo[142279]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:07 compute-0 sudo[142478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqltcfcjatwwcatfefpdjyxhnjknsqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789646.1903982-34-185735434447221/AnsiballZ_copy.py'
Nov 22 05:34:07 compute-0 sudo[142478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:07 compute-0 sudo[142312]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e584d636-a0e0-4275-9ba0-5e3653f6238e does not exist
Nov 22 05:34:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0796c27e-9c1d-47d3-8e88-ffe2b2e280b7 does not exist
Nov 22 05:34:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 87390f8c-e5f3-494e-9444-b45a323d0bf1 does not exist
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:34:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:34:07 compute-0 sudo[142491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:07 compute-0 sudo[142491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:07 compute-0 sudo[142491]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 python3.9[142488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789646.1903982-34-185735434447221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=18cfea5729768871b1211ef73b57421c54974f8f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:07 compute-0 sudo[142478]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 sudo[142516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:34:07 compute-0 sudo[142516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:07 compute-0 sudo[142516]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 ceph-mon[75840]: pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:34:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:34:07 compute-0 sudo[142550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:07 compute-0 sudo[142550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:07 compute-0 sudo[142550]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:07 compute-0 sudo[142590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:34:07 compute-0 sudo[142590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:07 compute-0 sshd-session[141657]: Connection closed by 192.168.122.30 port 53830
Nov 22 05:34:07 compute-0 sshd-session[141654]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:34:07 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Nov 22 05:34:07 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 22 05:34:07 compute-0 systemd[1]: session-44.scope: Consumed 3.306s CPU time.
Nov 22 05:34:07 compute-0 systemd-logind[798]: Removed session 44.
Nov 22 05:34:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.041560817 +0000 UTC m=+0.048407213 container create 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:34:08 compute-0 systemd[1]: Started libpod-conmon-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope.
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.015459831 +0000 UTC m=+0.022306317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.150422229 +0000 UTC m=+0.157268665 container init 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.165018943 +0000 UTC m=+0.171865369 container start 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.169064179 +0000 UTC m=+0.175910635 container attach 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:34:08 compute-0 dazzling_elbakyan[142670]: 167 167
Nov 22 05:34:08 compute-0 systemd[1]: libpod-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope: Deactivated successfully.
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.175335025 +0000 UTC m=+0.182181451 container died 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-da78836cb6b18e1d109916d1ba6a7f8d1ee64aedf729ae19b0d3866b9a3b6519-merged.mount: Deactivated successfully.
Nov 22 05:34:08 compute-0 podman[142654]: 2025-11-22 05:34:08.22650921 +0000 UTC m=+0.233355646 container remove 615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:34:08 compute-0 systemd[1]: libpod-conmon-615dfc5c9b9c2b8adf7d500a9872ed2bbbba8d26ecab786cb1de48ccefe899cb.scope: Deactivated successfully.
Nov 22 05:34:08 compute-0 podman[142694]: 2025-11-22 05:34:08.429930989 +0000 UTC m=+0.051049604 container create dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:34:08 compute-0 systemd[1]: Started libpod-conmon-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope.
Nov 22 05:34:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:08 compute-0 podman[142694]: 2025-11-22 05:34:08.489167826 +0000 UTC m=+0.110286441 container init dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:34:08 compute-0 podman[142694]: 2025-11-22 05:34:08.408275739 +0000 UTC m=+0.029394374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:08 compute-0 podman[142694]: 2025-11-22 05:34:08.502799175 +0000 UTC m=+0.123917790 container start dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:34:08 compute-0 podman[142694]: 2025-11-22 05:34:08.505350911 +0000 UTC m=+0.126469526 container attach dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:34:09 compute-0 ceph-mon[75840]: pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:09 compute-0 eager_jennings[142711]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:34:09 compute-0 eager_jennings[142711]: --> relative data size: 1.0
Nov 22 05:34:09 compute-0 eager_jennings[142711]: --> All data devices are unavailable
Nov 22 05:34:09 compute-0 systemd[1]: libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Deactivated successfully.
Nov 22 05:34:09 compute-0 systemd[1]: libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Consumed 1.059s CPU time.
Nov 22 05:34:09 compute-0 conmon[142711]: conmon dc159a4b2c8ca68abd13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope/container/memory.events
Nov 22 05:34:09 compute-0 podman[142694]: 2025-11-22 05:34:09.605726734 +0000 UTC m=+1.226845369 container died dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8f6f6191e26f9a4b57abba146deda98fa80d590da671404d2bc8c9be67f390-merged.mount: Deactivated successfully.
Nov 22 05:34:09 compute-0 podman[142694]: 2025-11-22 05:34:09.677785529 +0000 UTC m=+1.298904154 container remove dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jennings, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:34:09 compute-0 systemd[1]: libpod-conmon-dc159a4b2c8ca68abd1305977d9a1a1edbddbfa1208f493bda89dcabb230a1b0.scope: Deactivated successfully.
Nov 22 05:34:09 compute-0 sudo[142590]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:09 compute-0 sudo[142750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:09 compute-0 sudo[142750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:09 compute-0 sudo[142750]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:09 compute-0 sudo[142775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:34:09 compute-0 sudo[142775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:09 compute-0 sudo[142775]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:09 compute-0 sudo[142800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:10 compute-0 sudo[142800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:10 compute-0 sudo[142800]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:10 compute-0 sudo[142825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:34:10 compute-0 sudo[142825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.420281971 +0000 UTC m=+0.045553818 container create 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:34:10 compute-0 systemd[1]: Started libpod-conmon-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope.
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.401185439 +0000 UTC m=+0.026457306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.519334146 +0000 UTC m=+0.144606063 container init 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.530164501 +0000 UTC m=+0.155436368 container start 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.534563727 +0000 UTC m=+0.159835634 container attach 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:34:10 compute-0 funny_jemison[142906]: 167 167
Nov 22 05:34:10 compute-0 systemd[1]: libpod-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope: Deactivated successfully.
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.538660884 +0000 UTC m=+0.163932721 container died 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-78f4f62d3c3adfbcd5ca309070f9cc909e7c51066d49a07e9042e2975329559f-merged.mount: Deactivated successfully.
Nov 22 05:34:10 compute-0 podman[142890]: 2025-11-22 05:34:10.678813589 +0000 UTC m=+0.304085456 container remove 0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jemison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:34:10 compute-0 systemd[1]: libpod-conmon-0c0d9491fd3fef614ca9a599b94cee5f1f55e2c7c8e38e00150e58ea127d8768.scope: Deactivated successfully.
Nov 22 05:34:10 compute-0 podman[142930]: 2025-11-22 05:34:10.923937134 +0000 UTC m=+0.070568387 container create 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:34:10 compute-0 systemd[1]: Started libpod-conmon-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope.
Nov 22 05:34:10 compute-0 podman[142930]: 2025-11-22 05:34:10.897077348 +0000 UTC m=+0.043708651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:11 compute-0 podman[142930]: 2025-11-22 05:34:11.044069723 +0000 UTC m=+0.190701026 container init 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:34:11 compute-0 podman[142930]: 2025-11-22 05:34:11.053571242 +0000 UTC m=+0.200202465 container start 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:34:11 compute-0 podman[142930]: 2025-11-22 05:34:11.057028474 +0000 UTC m=+0.203659727 container attach 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:34:11 compute-0 ceph-mon[75840]: pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]: {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     "0": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "devices": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "/dev/loop3"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             ],
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_name": "ceph_lv0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_size": "21470642176",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "name": "ceph_lv0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "tags": {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_name": "ceph",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.crush_device_class": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.encrypted": "0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_id": "0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.vdo": "0"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             },
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "vg_name": "ceph_vg0"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         }
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     ],
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     "1": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "devices": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "/dev/loop4"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             ],
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_name": "ceph_lv1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_size": "21470642176",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "name": "ceph_lv1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "tags": {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_name": "ceph",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.crush_device_class": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.encrypted": "0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_id": "1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.vdo": "0"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             },
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "vg_name": "ceph_vg1"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         }
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     ],
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     "2": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "devices": [
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "/dev/loop5"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             ],
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_name": "ceph_lv2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_size": "21470642176",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "name": "ceph_lv2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "tags": {
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.cluster_name": "ceph",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.crush_device_class": "",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.encrypted": "0",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osd_id": "2",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:                 "ceph.vdo": "0"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             },
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "type": "block",
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:             "vg_name": "ceph_vg2"
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:         }
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]:     ]
Nov 22 05:34:11 compute-0 exciting_khayyam[142946]: }
Nov 22 05:34:11 compute-0 systemd[1]: libpod-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope: Deactivated successfully.
Nov 22 05:34:11 compute-0 podman[142930]: 2025-11-22 05:34:11.809049897 +0000 UTC m=+0.955681120 container died 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e422d781c63eb9046fdefc34032a24e2ed8f12de7d9dcdaee0e9e9f8ef2d09c-merged.mount: Deactivated successfully.
Nov 22 05:34:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:11 compute-0 podman[142930]: 2025-11-22 05:34:11.873641455 +0000 UTC m=+1.020272708 container remove 303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:34:11 compute-0 systemd[1]: libpod-conmon-303c44ee42f9c3eabcb8154bdfb2753cbbb9a9b9e6bdee251d649b8f3267aa52.scope: Deactivated successfully.
Nov 22 05:34:11 compute-0 sudo[142825]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:12 compute-0 sudo[142968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:12 compute-0 sudo[142968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:12 compute-0 sudo[142968]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:12 compute-0 sudo[142993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:34:12 compute-0 sudo[142993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:12 compute-0 sudo[142993]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:12 compute-0 sudo[143018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:12 compute-0 sudo[143018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:12 compute-0 sudo[143018]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:12 compute-0 sudo[143043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:34:12 compute-0 sudo[143043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:12 compute-0 ceph-mon[75840]: pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.623728167 +0000 UTC m=+0.051965457 container create 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:34:12 compute-0 systemd[1]: Started libpod-conmon-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope.
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.598034021 +0000 UTC m=+0.026271371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.731009477 +0000 UTC m=+0.159246847 container init 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.740627 +0000 UTC m=+0.168864301 container start 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:34:12 compute-0 fervent_blackburn[143123]: 167 167
Nov 22 05:34:12 compute-0 systemd[1]: libpod-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope: Deactivated successfully.
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.744682037 +0000 UTC m=+0.172919337 container attach 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.74817604 +0000 UTC m=+0.176413330 container died 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:34:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0ae79c65c536224c233ed8ce635d2603d7739a382064ec50c1594cdbfe1da92-merged.mount: Deactivated successfully.
Nov 22 05:34:12 compute-0 podman[143107]: 2025-11-22 05:34:12.80334637 +0000 UTC m=+0.231583650 container remove 3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:34:12 compute-0 systemd[1]: libpod-conmon-3d3eb7c2ac728240cdfdd956d45394add0249b3c003a8d81c72394e662cdfbe5.scope: Deactivated successfully.
Nov 22 05:34:12 compute-0 sshd-session[143127]: Accepted publickey for zuul from 192.168.122.30 port 43104 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:34:12 compute-0 systemd-logind[798]: New session 45 of user zuul.
Nov 22 05:34:12 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 22 05:34:12 compute-0 sshd-session[143127]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:34:13 compute-0 podman[143150]: 2025-11-22 05:34:13.033286585 +0000 UTC m=+0.070905874 container create 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 05:34:13 compute-0 podman[143150]: 2025-11-22 05:34:12.999905349 +0000 UTC m=+0.037524698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:34:13 compute-0 systemd[1]: Started libpod-conmon-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope.
Nov 22 05:34:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:34:13 compute-0 podman[143150]: 2025-11-22 05:34:13.152186682 +0000 UTC m=+0.189805961 container init 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:34:13 compute-0 podman[143150]: 2025-11-22 05:34:13.16540515 +0000 UTC m=+0.203024399 container start 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:34:13 compute-0 podman[143150]: 2025-11-22 05:34:13.168522922 +0000 UTC m=+0.206142171 container attach 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:13 compute-0 python3.9[143319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:34:14 compute-0 sharp_mclean[143217]: {
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_id": 1,
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "type": "bluestore"
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     },
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_id": 2,
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "type": "bluestore"
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     },
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_id": 0,
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:         "type": "bluestore"
Nov 22 05:34:14 compute-0 sharp_mclean[143217]:     }
Nov 22 05:34:14 compute-0 sharp_mclean[143217]: }
Nov 22 05:34:14 compute-0 systemd[1]: libpod-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Deactivated successfully.
Nov 22 05:34:14 compute-0 systemd[1]: libpod-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Consumed 1.091s CPU time.
Nov 22 05:34:14 compute-0 podman[143150]: 2025-11-22 05:34:14.252981125 +0000 UTC m=+1.290600434 container died 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd9b2aa52cf7e80523972984e0bb61aff252eecc96146b0406ba82b6b7605198-merged.mount: Deactivated successfully.
Nov 22 05:34:14 compute-0 podman[143150]: 2025-11-22 05:34:14.321100656 +0000 UTC m=+1.358719915 container remove 1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:34:14 compute-0 systemd[1]: libpod-conmon-1dae5590305cee07c9fb7295480a05493111f53ff46b5e8d856aa38296abe705.scope: Deactivated successfully.
Nov 22 05:34:14 compute-0 sudo[143043]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:34:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:34:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f0f99895-8adc-4253-bf43-beee7c9bcc63 does not exist
Nov 22 05:34:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d5c40b52-0e08-432a-b36e-a33173f3ace1 does not exist
Nov 22 05:34:14 compute-0 sudo[143387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:34:14 compute-0 sudo[143387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:14 compute-0 sudo[143387]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:14 compute-0 sudo[143413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:34:14 compute-0 sudo[143413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:34:14 compute-0 sudo[143413]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:15 compute-0 sudo[143562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxqgouawzahhrpiysqibpkwpiqwidpqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789654.5496547-34-90594416077488/AnsiballZ_file.py'
Nov 22 05:34:15 compute-0 sudo[143562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:15 compute-0 python3.9[143564]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:34:15 compute-0 sudo[143562]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:15 compute-0 ceph-mon[75840]: pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:34:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:15 compute-0 sudo[143714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hixlkjywxtnkrnphbkblsvnmygoxzgxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789655.550106-34-267256623136773/AnsiballZ_file.py'
Nov 22 05:34:15 compute-0 sudo[143714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:16 compute-0 python3.9[143716]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:34:16 compute-0 sudo[143714]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:17 compute-0 python3.9[143866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:34:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:17 compute-0 ceph-mon[75840]: pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:17 compute-0 sshd-session[143867]: Invalid user bootstrap from 80.94.92.166 port 51282
Nov 22 05:34:17 compute-0 sshd-session[143867]: Connection closed by invalid user bootstrap 80.94.92.166 port 51282 [preauth]
Nov 22 05:34:17 compute-0 sudo[144018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lneoqalyutjrxeeydgpnczyuxdinjlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789657.2684553-57-239603764766667/AnsiballZ_seboolean.py'
Nov 22 05:34:17 compute-0 sudo[144018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:18 compute-0 python3.9[144020]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 05:34:19 compute-0 sudo[144018]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:19 compute-0 ceph-mon[75840]: pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:20 compute-0 sudo[144174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smuzguuxlsyiwphryynkhemtocddwnxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789659.6686563-67-30857866160512/AnsiballZ_setup.py'
Nov 22 05:34:20 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 22 05:34:20 compute-0 sudo[144174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:20 compute-0 python3.9[144176]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:34:20 compute-0 sudo[144174]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:21 compute-0 sudo[144258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujrgnqjfavlfroyoltlgkrjzlrvphix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789659.6686563-67-30857866160512/AnsiballZ_dnf.py'
Nov 22 05:34:21 compute-0 sudo[144258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:21 compute-0 python3.9[144260]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:34:21 compute-0 ceph-mon[75840]: pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.469551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661469745, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 602, "num_deletes": 251, "total_data_size": 678484, "memory_usage": 689576, "flush_reason": "Manual Compaction"}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661475856, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 672652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9125, "largest_seqno": 9726, "table_properties": {"data_size": 669384, "index_size": 1176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7228, "raw_average_key_size": 18, "raw_value_size": 662863, "raw_average_value_size": 1690, "num_data_blocks": 54, "num_entries": 392, "num_filter_entries": 392, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789617, "oldest_key_time": 1763789617, "file_creation_time": 1763789661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6381 microseconds, and 2806 cpu microseconds.
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.475943) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 672652 bytes OK
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.475994) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478199) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478214) EVENT_LOG_v1 {"time_micros": 1763789661478209, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 675195, prev total WAL file size 675195, number of live WAL files 2.
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.479029) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(656KB)], [23(6739KB)]
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661479105, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7574174, "oldest_snapshot_seqno": -1}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3304 keys, 6046751 bytes, temperature: kUnknown
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661514628, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6046751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6022576, "index_size": 14786, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80164, "raw_average_key_size": 24, "raw_value_size": 5960755, "raw_average_value_size": 1804, "num_data_blocks": 645, "num_entries": 3304, "num_filter_entries": 3304, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.514919) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6046751 bytes
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.516870) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 212.7 rd, 169.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(20.2) write-amplify(9.0) OK, records in: 3818, records dropped: 514 output_compression: NoCompression
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.516901) EVENT_LOG_v1 {"time_micros": 1763789661516886, "job": 8, "event": "compaction_finished", "compaction_time_micros": 35611, "compaction_time_cpu_micros": 17365, "output_level": 6, "num_output_files": 1, "total_output_size": 6046751, "num_input_records": 3818, "num_output_records": 3304, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661517243, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789661519644, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.478891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:34:21.519787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:34:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:22 compute-0 sudo[144258]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:23 compute-0 sudo[144411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltvvluijtchsidlcvcjqzczyfxqccgng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789662.7346637-79-247596067608476/AnsiballZ_systemd.py'
Nov 22 05:34:23 compute-0 sudo[144411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:23 compute-0 ceph-mon[75840]: pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:23 compute-0 python3.9[144413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:34:23 compute-0 sudo[144411]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:24 compute-0 sudo[144566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvtrrftxgovzkykbfdwastgbhtpsqipf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789664.0863006-87-154725646300928/AnsiballZ_edpm_nftables_snippet.py'
Nov 22 05:34:24 compute-0 sudo[144566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:24 compute-0 python3[144568]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 22 05:34:24 compute-0 sudo[144566]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:25 compute-0 ceph-mon[75840]: pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:25 compute-0 sudo[144718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwuitpoocmtfsqltrvlfvfaztsbkbnco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789665.1706827-96-126283299667985/AnsiballZ_file.py'
Nov 22 05:34:25 compute-0 sudo[144718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:25 compute-0 python3.9[144720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:25 compute-0 sudo[144718]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:26 compute-0 sudo[144870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tipacihfebyvpxmmpcezkarehtmamywk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789665.9970796-104-227649922444865/AnsiballZ_stat.py'
Nov 22 05:34:26 compute-0 sudo[144870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:26 compute-0 python3.9[144872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:26 compute-0 sudo[144870]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:27 compute-0 sudo[144948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnmwzkvxwxozstftyhjuzlooqhpkxwbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789665.9970796-104-227649922444865/AnsiballZ_file.py'
Nov 22 05:34:27 compute-0 sudo[144948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:27 compute-0 python3.9[144950]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:27 compute-0 sudo[144948]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:27 compute-0 ceph-mon[75840]: pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:27 compute-0 sudo[145100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoycxdlsrlvbpqxghmsjbeuehfgieegs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789667.5096066-116-99384649084124/AnsiballZ_stat.py'
Nov 22 05:34:27 compute-0 sudo[145100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:28 compute-0 python3.9[145102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:28 compute-0 sudo[145100]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:28 compute-0 sudo[145178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-estbnuxkqavalvswnahifsdkljergmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789667.5096066-116-99384649084124/AnsiballZ_file.py'
Nov 22 05:34:28 compute-0 sudo[145178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:28 compute-0 python3.9[145180]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._1vy4_1d recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:28 compute-0 sudo[145178]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:29 compute-0 sudo[145330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hboixlkygjfieyauisxuagfjcswlljct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789668.945982-128-253662244236295/AnsiballZ_stat.py'
Nov 22 05:34:29 compute-0 sudo[145330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:29 compute-0 ceph-mon[75840]: pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:29 compute-0 python3.9[145332]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:29 compute-0 sudo[145330]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:29 compute-0 sudo[145408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmylrvkxlikdncgdcxawfgcahefhcohz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789668.945982-128-253662244236295/AnsiballZ_file.py'
Nov 22 05:34:29 compute-0 sudo[145408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:30 compute-0 python3.9[145410]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:30 compute-0 sudo[145408]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:30 compute-0 sudo[145560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcflvtmpuqjnmsegoxrokugmgjnbkofm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789670.2996979-141-91175893117726/AnsiballZ_command.py'
Nov 22 05:34:30 compute-0 sudo[145560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:31 compute-0 python3.9[145562]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:31 compute-0 sudo[145560]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:31 compute-0 ceph-mon[75840]: pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:31 compute-0 sudo[145713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtgnffofoitmtalcccvvxzanbaosrgez ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789671.2794254-149-194890991703762/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 05:34:31 compute-0 sudo[145713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:31 compute-0 python3[145715]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 05:34:31 compute-0 sudo[145713]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:32 compute-0 sudo[145865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgbhhiifkcfczbdhybxujztzpxvikcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789672.1297882-157-193595960011703/AnsiballZ_stat.py'
Nov 22 05:34:32 compute-0 sudo[145865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:32 compute-0 python3.9[145867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:32 compute-0 sudo[145865]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:33 compute-0 sudo[145990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixuunlfhyfsmmdfiuntjapalnoqjnjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789672.1297882-157-193595960011703/AnsiballZ_copy.py'
Nov 22 05:34:33 compute-0 sudo[145990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:33 compute-0 ceph-mon[75840]: pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:33 compute-0 python3.9[145992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789672.1297882-157-193595960011703/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:33 compute-0 sudo[145990]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:34 compute-0 sudo[146142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngmbakzgvbradeoqetoxkexxwysplqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789673.9106023-172-115541317199094/AnsiballZ_stat.py'
Nov 22 05:34:34 compute-0 sudo[146142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:34 compute-0 python3.9[146144]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:34 compute-0 sudo[146142]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:34 compute-0 sudo[146267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokqphtvozobnnuwjeypisbeegjszcvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789673.9106023-172-115541317199094/AnsiballZ_copy.py'
Nov 22 05:34:34 compute-0 sudo[146267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:35 compute-0 python3.9[146269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789673.9106023-172-115541317199094/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:35 compute-0 sudo[146267]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:35 compute-0 ceph-mon[75840]: pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:35 compute-0 sudo[146419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmziqadviiplrdfzerznyzdhfkwxmhen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789675.206675-187-29367413167044/AnsiballZ_stat.py'
Nov 22 05:34:35 compute-0 sudo[146419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:35 compute-0 python3.9[146421]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:35 compute-0 sudo[146419]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:36 compute-0 sudo[146544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgjkkozadhjnsoepijjtezavjizmsku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789675.206675-187-29367413167044/AnsiballZ_copy.py'
Nov 22 05:34:36 compute-0 sudo[146544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:36 compute-0 python3.9[146546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789675.206675-187-29367413167044/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:36 compute-0 sudo[146544]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:36 compute-0 sudo[146696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlpmmoofjzathyjklgrhhrynflldttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789676.5816927-202-218290269505715/AnsiballZ_stat.py'
Nov 22 05:34:36 compute-0 sudo[146696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:37 compute-0 python3.9[146698]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:37 compute-0 sudo[146696]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:37 compute-0 ceph-mon[75840]: pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:37 compute-0 sudo[146821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srnrhamonoozbimnxwxamfaryhqkloka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789676.5816927-202-218290269505715/AnsiballZ_copy.py'
Nov 22 05:34:37 compute-0 sudo[146821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:37 compute-0 python3.9[146823]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789676.5816927-202-218290269505715/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:37 compute-0 sudo[146821]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:38 compute-0 sudo[146973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdgxxqwtlzlkzvhnhvhzlmqgtduhnccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789677.9419641-217-241713955397323/AnsiballZ_stat.py'
Nov 22 05:34:38 compute-0 sudo[146973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:38 compute-0 python3.9[146975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:38 compute-0 sudo[146973]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:38 compute-0 sudo[147098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-komkzndjbptyrzwjipvoxvxaalvriwvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789677.9419641-217-241713955397323/AnsiballZ_copy.py'
Nov 22 05:34:38 compute-0 sudo[147098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:39 compute-0 python3.9[147100]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763789677.9419641-217-241713955397323/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:39 compute-0 sudo[147098]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:39 compute-0 sudo[147250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eylgiaeqmscymucbnvihtdouubqcimyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789679.1764054-232-242664964153848/AnsiballZ_file.py'
Nov 22 05:34:39 compute-0 sudo[147250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:39 compute-0 ceph-mon[75840]: pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:39 compute-0 python3.9[147252]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:39 compute-0 sudo[147250]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:40 compute-0 sudo[147402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwefhfsvaghpsantlpmtmfnayovmtgft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789679.969558-240-148986203132703/AnsiballZ_command.py'
Nov 22 05:34:40 compute-0 sudo[147402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:40 compute-0 python3.9[147404]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:40 compute-0 sudo[147402]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:41 compute-0 sudo[147557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arpduoqmgewhopppgtrffkyjyufazsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789680.7928798-248-267638780627044/AnsiballZ_blockinfile.py'
Nov 22 05:34:41 compute-0 sudo[147557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:41 compute-0 python3.9[147559]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:41 compute-0 sudo[147557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:41 compute-0 ceph-mon[75840]: pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:41 compute-0 sudo[147709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxiidtrtxraduhwowewkxhxihqnzdjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789681.7376444-257-163284035275798/AnsiballZ_command.py'
Nov 22 05:34:41 compute-0 sudo[147709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:42 compute-0 python3.9[147711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:42 compute-0 sudo[147709]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:42 compute-0 ceph-mon[75840]: pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:42 compute-0 sudo[147862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmjtfwymdxmcfpwhmktymbwrpgvzsfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789682.3888497-265-148270780406294/AnsiballZ_stat.py'
Nov 22 05:34:42 compute-0 sudo[147862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:42 compute-0 python3.9[147864]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:34:42 compute-0 sudo[147862]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:43 compute-0 sudo[148016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdlyvwfaswbvzjdflflihtkxpbvdqutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789683.190153-273-161322255160489/AnsiballZ_command.py'
Nov 22 05:34:43 compute-0 sudo[148016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:34:43
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta']
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:34:43 compute-0 python3.9[148018]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:43 compute-0 sudo[148016]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:34:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:44 compute-0 sudo[148171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqmhnbrycvttokgsjgcuaxovsyohhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789683.9845054-281-59557650493180/AnsiballZ_file.py'
Nov 22 05:34:44 compute-0 sudo[148171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:44 compute-0 python3.9[148173]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:44 compute-0 sudo[148171]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:44 compute-0 ceph-mon[75840]: pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:45 compute-0 python3.9[148323]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:34:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:46 compute-0 sudo[148474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljjzeqmxoffgwiizzuzqtoophaqrolq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789686.170391-321-173894059878941/AnsiballZ_command.py'
Nov 22 05:34:46 compute-0 sudo[148474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:46 compute-0 python3.9[148476]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:46 compute-0 ovs-vsctl[148477]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 22 05:34:46 compute-0 sudo[148474]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:46 compute-0 ceph-mon[75840]: pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:47 compute-0 sudo[148627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muovhrmbdcoybgedjtnnpsnsueqpeyur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789686.9679947-330-249019784703382/AnsiballZ_command.py'
Nov 22 05:34:47 compute-0 sudo[148627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:47 compute-0 python3.9[148629]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:47 compute-0 sudo[148627]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:48 compute-0 sudo[148782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angnkebzgjylvdsstycmqzjuiapofgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789687.7789838-338-82289838124044/AnsiballZ_command.py'
Nov 22 05:34:48 compute-0 sudo[148782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:48 compute-0 python3.9[148784]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:34:48 compute-0 ovs-vsctl[148785]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 22 05:34:48 compute-0 sudo[148782]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:48 compute-0 ceph-mon[75840]: pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:49 compute-0 python3.9[148935]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:34:49 compute-0 sudo[149087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vilcnxheglewtswcavinhluiiqdujkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789689.377099-355-60704045593893/AnsiballZ_file.py'
Nov 22 05:34:49 compute-0 sudo[149087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:49 compute-0 python3.9[149089]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:34:49 compute-0 sudo[149087]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:50 compute-0 sudo[149239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlwmhgsdukttyffxgfkgttxzqfiwebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789690.1058116-363-77832171920080/AnsiballZ_stat.py'
Nov 22 05:34:50 compute-0 sudo[149239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:50 compute-0 python3.9[149241]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:50 compute-0 sudo[149239]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:50 compute-0 sudo[149317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihtmomthvnutbjjvglmiitxpapefksi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789690.1058116-363-77832171920080/AnsiballZ_file.py'
Nov 22 05:34:50 compute-0 sudo[149317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:50 compute-0 ceph-mon[75840]: pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:51 compute-0 python3.9[149319]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:34:51 compute-0 sudo[149317]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:51 compute-0 sudo[149469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisopmazpjjpyaoakjibpheudwfmrzwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789691.2889946-363-210998487412688/AnsiballZ_stat.py'
Nov 22 05:34:51 compute-0 sudo[149469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:51 compute-0 python3.9[149471]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:51 compute-0 sudo[149469]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:52 compute-0 sudo[149547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpyshuvketjkesnxepvehcnzmxzqldon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789691.2889946-363-210998487412688/AnsiballZ_file.py'
Nov 22 05:34:52 compute-0 sudo[149547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:52 compute-0 python3.9[149549]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:34:52 compute-0 sudo[149547]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:52 compute-0 sudo[149699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkocfrvtfrsltpbftozyiondiboygpnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789692.4404616-386-15351498373296/AnsiballZ_file.py'
Nov 22 05:34:52 compute-0 sudo[149699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:34:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:34:52 compute-0 python3.9[149701]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:52 compute-0 sudo[149699]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:52 compute-0 ceph-mon[75840]: pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:53 compute-0 sudo[149851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muexzjoroyvqvqrbhxdnqfqmvacbgbti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789693.0272017-394-147516853342224/AnsiballZ_stat.py'
Nov 22 05:34:53 compute-0 sudo[149851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:53 compute-0 python3.9[149853]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:53 compute-0 sudo[149851]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:53 compute-0 sudo[149929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvcemvsgorztzzyuyluhfbtonimdqtmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789693.0272017-394-147516853342224/AnsiballZ_file.py'
Nov 22 05:34:53 compute-0 sudo[149929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:53 compute-0 python3.9[149931]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:54 compute-0 sudo[149929]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:54 compute-0 sudo[150081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwjlhhhezcpmrupjlzftfggdxqpzfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789694.1872196-406-228211945035919/AnsiballZ_stat.py'
Nov 22 05:34:54 compute-0 sudo[150081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:54 compute-0 python3.9[150083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:54 compute-0 sudo[150081]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:54 compute-0 sudo[150159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmdjefmmghftzmjvungenezbqiclszyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789694.1872196-406-228211945035919/AnsiballZ_file.py'
Nov 22 05:34:54 compute-0 sudo[150159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:54 compute-0 ceph-mon[75840]: pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:55 compute-0 python3.9[150161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:55 compute-0 sudo[150159]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:55 compute-0 sudo[150311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssczdhpxorldxmxomslyvqyrlngfmudt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789695.346003-418-269380864084930/AnsiballZ_systemd.py'
Nov 22 05:34:55 compute-0 sudo[150311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:56 compute-0 python3.9[150313]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:34:56 compute-0 systemd[1]: Reloading.
Nov 22 05:34:56 compute-0 systemd-rc-local-generator[150336]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:34:56 compute-0 systemd-sysv-generator[150342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:34:56 compute-0 sudo[150311]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:56 compute-0 sudo[150500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwtimsllagxnykzifguugunidnihxkod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789696.6563447-426-53176687262541/AnsiballZ_stat.py'
Nov 22 05:34:56 compute-0 sudo[150500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:57 compute-0 ceph-mon[75840]: pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:57 compute-0 python3.9[150502]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:34:57 compute-0 sudo[150500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:57 compute-0 sudo[150578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvpmcajtzbrntomvpsxoxtcqdgudngxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789696.6563447-426-53176687262541/AnsiballZ_file.py'
Nov 22 05:34:57 compute-0 sudo[150578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:57 compute-0 python3.9[150580]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:57 compute-0 sudo[150578]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:58 compute-0 sudo[150730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blajrynwnwwckcnktlbutjbbpajnlzdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789697.796575-438-18048543068865/AnsiballZ_stat.py'
Nov 22 05:34:58 compute-0 sudo[150730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:58 compute-0 python3.9[150732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:34:58 compute-0 sudo[150730]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:58 compute-0 sudo[150808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sojcnydcglsplpbgvjjbtgojthbzicsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789697.796575-438-18048543068865/AnsiballZ_file.py'
Nov 22 05:34:58 compute-0 sudo[150808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:58 compute-0 python3.9[150810]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:34:58 compute-0 sudo[150808]: pam_unix(sudo:session): session closed for user root
Nov 22 05:34:59 compute-0 ceph-mon[75840]: pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:34:59 compute-0 sudo[150960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwbmmyevscdznwgjbugystpqhvjwqbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789699.0235803-450-110178296774537/AnsiballZ_systemd.py'
Nov 22 05:34:59 compute-0 sudo[150960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:34:59 compute-0 python3.9[150962]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:34:59 compute-0 systemd[1]: Reloading.
Nov 22 05:34:59 compute-0 systemd-rc-local-generator[150990]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:34:59 compute-0 systemd-sysv-generator[150993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:34:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:00 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 05:35:00 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 05:35:00 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 05:35:00 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 05:35:00 compute-0 sudo[150960]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:00 compute-0 sudo[151153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkmwcqlllpwdredhyonsgphbtuqfafcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789700.3934417-460-275420151255926/AnsiballZ_file.py'
Nov 22 05:35:00 compute-0 sudo[151153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:00 compute-0 python3.9[151155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:00 compute-0 sudo[151153]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:01 compute-0 ceph-mon[75840]: pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:01 compute-0 sudo[151305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwqslodczdvgslnnmhbqsuehmwpymuzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789701.1731179-468-122598031462066/AnsiballZ_stat.py'
Nov 22 05:35:01 compute-0 sudo[151305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:01 compute-0 python3.9[151307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:01 compute-0 sudo[151305]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:02 compute-0 sudo[151428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkudlbsxsirvldtladqskiopbpgjotse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789701.1731179-468-122598031462066/AnsiballZ_copy.py'
Nov 22 05:35:02 compute-0 sudo[151428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:02 compute-0 python3.9[151430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789701.1731179-468-122598031462066/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:02 compute-0 sudo[151428]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:02 compute-0 sudo[151580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zejqmhsxsyhbtxldzudqqqdmgckmglcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789702.629686-485-212323071859363/AnsiballZ_file.py'
Nov 22 05:35:02 compute-0 sudo[151580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:03 compute-0 ceph-mon[75840]: pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:03 compute-0 python3.9[151582]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:03 compute-0 sudo[151580]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:03 compute-0 sudo[151732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfazdkqpgcyjisxhkhsmezktbvdsapru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789703.3898373-493-33275843032799/AnsiballZ_stat.py'
Nov 22 05:35:03 compute-0 sudo[151732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:03 compute-0 python3.9[151734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:03 compute-0 sudo[151732]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:04 compute-0 sudo[151855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqdtydbfxnziebccoiybksasxdphnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789703.3898373-493-33275843032799/AnsiballZ_copy.py'
Nov 22 05:35:04 compute-0 sudo[151855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:04 compute-0 python3.9[151857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789703.3898373-493-33275843032799/.source.json _original_basename=.1q0p8czq follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:35:04 compute-0 sudo[151855]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:05 compute-0 ceph-mon[75840]: pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:05 compute-0 sudo[152007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdqeoasdfuxogvftowucmrrphmgngjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789704.702074-508-129701765216602/AnsiballZ_file.py'
Nov 22 05:35:05 compute-0 sudo[152007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:05 compute-0 python3.9[152009]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:35:05 compute-0 sudo[152007]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:05 compute-0 sudo[152159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryauflvpibhvbuyvaynmqgrpilcsfjso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789705.5894382-516-128586460749803/AnsiballZ_stat.py'
Nov 22 05:35:05 compute-0 sudo[152159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:06 compute-0 sudo[152159]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:06 compute-0 sudo[152282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkyoddydlztfefswyakzouqkttetelby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789705.5894382-516-128586460749803/AnsiballZ_copy.py'
Nov 22 05:35:06 compute-0 sudo[152282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:06 compute-0 sudo[152282]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:07 compute-0 ceph-mon[75840]: pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:07 compute-0 sudo[152434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cerybzwublyduzcfcarrgfkoxucajael ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789707.1297774-533-218229052508370/AnsiballZ_container_config_data.py'
Nov 22 05:35:07 compute-0 sudo[152434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:07 compute-0 python3.9[152436]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 22 05:35:07 compute-0 sudo[152434]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:08 compute-0 sudo[152586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzjvvdtqcfkkzyftjsaaskzfspkqhgms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789708.0741613-542-276700213197420/AnsiballZ_container_config_hash.py'
Nov 22 05:35:08 compute-0 sudo[152586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:08 compute-0 python3.9[152588]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 05:35:08 compute-0 sudo[152586]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:09 compute-0 ceph-mon[75840]: pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:09 compute-0 sudo[152738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbkvrnmzaqyauyxewejxlxojhtoqrqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789709.2042167-551-148755089958947/AnsiballZ_podman_container_info.py'
Nov 22 05:35:09 compute-0 sudo[152738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:09 compute-0 python3.9[152740]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 05:35:10 compute-0 sudo[152738]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:11 compute-0 ceph-mon[75840]: pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:11 compute-0 sudo[152917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjeszfxgspenxgmvsboidhzerqybjyw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789710.956211-564-30689943532681/AnsiballZ_edpm_container_manage.py'
Nov 22 05:35:11 compute-0 sudo[152917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:11 compute-0 python3[152919]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 05:35:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:13 compute-0 ceph-mon[75840]: pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:14 compute-0 sudo[152984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:14 compute-0 sudo[152984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:14 compute-0 sudo[152984]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:14 compute-0 sudo[153009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:35:14 compute-0 sudo[153009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:14 compute-0 sudo[153009]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:14 compute-0 sudo[153034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:14 compute-0 sudo[153034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:14 compute-0 sudo[153034]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:14 compute-0 sudo[153059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 05:35:14 compute-0 sudo[153059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:15 compute-0 ceph-mon[75840]: pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:15 compute-0 sudo[153059]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:35:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:35:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:16 compute-0 sudo[153135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:16 compute-0 sudo[153135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:16 compute-0 sudo[153135]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:16 compute-0 sudo[153160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:35:16 compute-0 sudo[153160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:16 compute-0 sudo[153160]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:16 compute-0 sudo[153185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:16 compute-0 sudo[153185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:16 compute-0 sudo[153185]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:16 compute-0 sudo[153210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:35:16 compute-0 sudo[153210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:16 compute-0 ceph-mon[75840]: pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:17 compute-0 podman[152934]: 2025-11-22 05:35:17.200198032 +0000 UTC m=+5.343958246 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 05:35:17 compute-0 podman[153290]: 2025-11-22 05:35:17.389892654 +0000 UTC m=+0.064909039 container create 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Nov 22 05:35:17 compute-0 podman[153290]: 2025-11-22 05:35:17.360054863 +0000 UTC m=+0.035071238 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 05:35:17 compute-0 python3[152919]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 05:35:17 compute-0 sudo[153210]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:17 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c3ac82fb-6974-43b1-88f2-31a36f3f340e does not exist
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6cfcf25a-8676-46e2-9369-56ea72ea45f8 does not exist
Nov 22 05:35:17 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f478eb82-0d23-4284-bb13-43f0659ba8ce does not exist
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:35:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:35:17 compute-0 sudo[152917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:17 compute-0 sudo[153343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:17 compute-0 sudo[153343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:17 compute-0 sudo[153343]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:17 compute-0 sudo[153389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:35:17 compute-0 sudo[153389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:17 compute-0 sudo[153389]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:17 compute-0 sudo[153417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:17 compute-0 sudo[153417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:17 compute-0 sudo[153417]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:17 compute-0 sudo[153465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:35:17 compute-0 sudo[153465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:35:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:35:18 compute-0 sudo[153620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyncvtaumwjwdodwzlwwduyzeeicnes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789717.8330615-572-157648672679910/AnsiballZ_stat.py'
Nov 22 05:35:18 compute-0 sudo[153620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.307105463 +0000 UTC m=+0.043827222 container create ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:35:18 compute-0 systemd[1]: Started libpod-conmon-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope.
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.287311602 +0000 UTC m=+0.024033391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:18 compute-0 python3.9[153631]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.409821413 +0000 UTC m=+0.146543162 container init ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.421776426 +0000 UTC m=+0.158498405 container start ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:35:18 compute-0 awesome_murdock[153651]: 167 167
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.42841907 +0000 UTC m=+0.165140839 container attach ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:35:18 compute-0 systemd[1]: libpod-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope: Deactivated successfully.
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.429775518 +0000 UTC m=+0.166497267 container died ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:35:18 compute-0 sudo[153620]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c14c4e294cb987b012e8b0d2947abdbfd77fc3d620c98e584ac0576169412ed-merged.mount: Deactivated successfully.
Nov 22 05:35:18 compute-0 podman[153634]: 2025-11-22 05:35:18.476965532 +0000 UTC m=+0.213687311 container remove ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:35:18 compute-0 systemd[1]: libpod-conmon-ac4ae7869bc9350851733c355919803c4d1ac6e178bcf65639cf3838e01acaf2.scope: Deactivated successfully.
Nov 22 05:35:18 compute-0 podman[153700]: 2025-11-22 05:35:18.622734511 +0000 UTC m=+0.041126456 container create 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:35:18 compute-0 systemd[1]: Started libpod-conmon-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope.
Nov 22 05:35:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:18 compute-0 podman[153700]: 2025-11-22 05:35:18.607908039 +0000 UTC m=+0.026300014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:18 compute-0 podman[153700]: 2025-11-22 05:35:18.716655406 +0000 UTC m=+0.135047421 container init 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:35:18 compute-0 podman[153700]: 2025-11-22 05:35:18.728153356 +0000 UTC m=+0.146545341 container start 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:35:18 compute-0 podman[153700]: 2025-11-22 05:35:18.732174188 +0000 UTC m=+0.150566173 container attach 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:35:19 compute-0 sudo[153846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjbuqwzykehzvlunesicmxmglufytho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789718.7924185-581-216782755243326/AnsiballZ_file.py'
Nov 22 05:35:19 compute-0 sudo[153846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:19 compute-0 python3.9[153848]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:35:19 compute-0 sudo[153846]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:19 compute-0 ceph-mon[75840]: pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:19 compute-0 sudo[153940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drqxhsnenaweznwyimoprwubzkcgtcww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789718.7924185-581-216782755243326/AnsiballZ_stat.py'
Nov 22 05:35:19 compute-0 sudo[153940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:19 compute-0 nice_euclid[153716]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:35:19 compute-0 nice_euclid[153716]: --> relative data size: 1.0
Nov 22 05:35:19 compute-0 nice_euclid[153716]: --> All data devices are unavailable
Nov 22 05:35:19 compute-0 systemd[1]: libpod-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Deactivated successfully.
Nov 22 05:35:19 compute-0 podman[153700]: 2025-11-22 05:35:19.857505071 +0000 UTC m=+1.275897026 container died 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:35:19 compute-0 systemd[1]: libpod-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Consumed 1.072s CPU time.
Nov 22 05:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-004489eb22e7149ad1d8c978b8729e5b7226fc1b457ed6c4d4e8cf24c96943a0-merged.mount: Deactivated successfully.
Nov 22 05:35:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:19 compute-0 podman[153700]: 2025-11-22 05:35:19.92892144 +0000 UTC m=+1.347313395 container remove 762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:35:19 compute-0 systemd[1]: libpod-conmon-762bf82396f2fc530c03c17f56299ba3eb045a9841336d74b4bcbd0765ee4c69.scope: Deactivated successfully.
Nov 22 05:35:19 compute-0 python3.9[153944]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:35:19 compute-0 sudo[153940]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:19 compute-0 sudo[153465]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:20 compute-0 sudo[153960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:20 compute-0 sudo[153960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:20 compute-0 sudo[153960]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:20 compute-0 sudo[154008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:35:20 compute-0 sudo[154008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:20 compute-0 sudo[154008]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:20 compute-0 sudo[154062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:20 compute-0 sudo[154062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:20 compute-0 sudo[154062]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:20 compute-0 sudo[154087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:35:20 compute-0 sudo[154087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:20 compute-0 sudo[154246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfekjhpehuyagowzwpvtptynzqkfcvil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789720.0426662-581-96350675462284/AnsiballZ_copy.py'
Nov 22 05:35:20 compute-0 sudo[154246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.651042117 +0000 UTC m=+0.058403308 container create 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:35:20 compute-0 systemd[1]: Started libpod-conmon-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope.
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.623317264 +0000 UTC m=+0.030678505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:20 compute-0 python3.9[154251]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789720.0426662-581-96350675462284/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:35:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.749838847 +0000 UTC m=+0.157200068 container init 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:35:20 compute-0 sudo[154246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.761213154 +0000 UTC m=+0.168574345 container start 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.766025198 +0000 UTC m=+0.173386389 container attach 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:35:20 compute-0 elegant_jones[154268]: 167 167
Nov 22 05:35:20 compute-0 systemd[1]: libpod-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope: Deactivated successfully.
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.769417712 +0000 UTC m=+0.176778903 container died 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:35:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7e73b4f28471de86e2b74b2c72052fc96cdc1e7e3106bdaee7cf3776f7444a-merged.mount: Deactivated successfully.
Nov 22 05:35:20 compute-0 podman[154252]: 2025-11-22 05:35:20.823764646 +0000 UTC m=+0.231125837 container remove 462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jones, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:35:20 compute-0 systemd[1]: libpod-conmon-462b8df5f99c80b49e687a21eef85b1a4bab9ec3d19f05d873d77d25cd86e223.scope: Deactivated successfully.
Nov 22 05:35:21 compute-0 podman[154334]: 2025-11-22 05:35:21.022032866 +0000 UTC m=+0.055138376 container create ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:35:21 compute-0 systemd[1]: Started libpod-conmon-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope.
Nov 22 05:35:21 compute-0 sudo[154377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhftrwvdddgpnyiszwbumdibztrgkism ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789720.0426662-581-96350675462284/AnsiballZ_systemd.py'
Nov 22 05:35:21 compute-0 sudo[154377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:21 compute-0 podman[154334]: 2025-11-22 05:35:21.00276092 +0000 UTC m=+0.035866430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:21 compute-0 podman[154334]: 2025-11-22 05:35:21.127548464 +0000 UTC m=+0.160654004 container init ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:35:21 compute-0 podman[154334]: 2025-11-22 05:35:21.140855495 +0000 UTC m=+0.173961025 container start ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:35:21 compute-0 podman[154334]: 2025-11-22 05:35:21.148795925 +0000 UTC m=+0.181901445 container attach ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:35:21 compute-0 python3.9[154383]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:35:21 compute-0 systemd[1]: Reloading.
Nov 22 05:35:21 compute-0 systemd-rc-local-generator[154411]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:35:21 compute-0 systemd-sysv-generator[154414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:35:21 compute-0 ceph-mon[75840]: pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:21 compute-0 sudo[154377]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:21 compute-0 sudo[154500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnxosqlssihaohialuasyqychohtokbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789720.0426662-581-96350675462284/AnsiballZ_systemd.py'
Nov 22 05:35:21 compute-0 sudo[154500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:21 compute-0 stoic_hermann[154381]: {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     "0": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "devices": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "/dev/loop3"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             ],
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_name": "ceph_lv0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_size": "21470642176",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "name": "ceph_lv0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "tags": {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_name": "ceph",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.crush_device_class": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.encrypted": "0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_id": "0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.vdo": "0"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             },
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "vg_name": "ceph_vg0"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         }
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     ],
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     "1": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "devices": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "/dev/loop4"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             ],
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_name": "ceph_lv1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_size": "21470642176",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "name": "ceph_lv1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "tags": {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_name": "ceph",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.crush_device_class": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.encrypted": "0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_id": "1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.vdo": "0"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             },
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "vg_name": "ceph_vg1"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         }
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     ],
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     "2": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "devices": [
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "/dev/loop5"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             ],
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_name": "ceph_lv2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_size": "21470642176",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "name": "ceph_lv2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "tags": {
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.cluster_name": "ceph",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.crush_device_class": "",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.encrypted": "0",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osd_id": "2",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:                 "ceph.vdo": "0"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             },
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "type": "block",
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:             "vg_name": "ceph_vg2"
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:         }
Nov 22 05:35:21 compute-0 stoic_hermann[154381]:     ]
Nov 22 05:35:21 compute-0 stoic_hermann[154381]: }
Nov 22 05:35:22 compute-0 systemd[1]: libpod-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope: Deactivated successfully.
Nov 22 05:35:22 compute-0 podman[154334]: 2025-11-22 05:35:22.03065564 +0000 UTC m=+1.063761220 container died ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:35:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc80c7ced01ebf917dc09cc0b00a5f546e901dae1ab19ff2885ee18b6b5b6982-merged.mount: Deactivated successfully.
Nov 22 05:35:22 compute-0 podman[154334]: 2025-11-22 05:35:22.108644772 +0000 UTC m=+1.141750302 container remove ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:35:22 compute-0 systemd[1]: libpod-conmon-ba994f992b6d2b278e53376d263f5c8177d6503f737fcde84c348ecb02936cdd.scope: Deactivated successfully.
Nov 22 05:35:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:22 compute-0 sudo[154087]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:22 compute-0 sudo[154516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:22 compute-0 sudo[154516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:22 compute-0 sudo[154516]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:22 compute-0 sudo[154541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:35:22 compute-0 sudo[154541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:22 compute-0 sudo[154541]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:22 compute-0 python3.9[154502]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:35:22 compute-0 sudo[154566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:22 compute-0 sudo[154566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:22 compute-0 sudo[154566]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:22 compute-0 systemd[1]: Reloading.
Nov 22 05:35:22 compute-0 systemd-rc-local-generator[154644]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:35:22 compute-0 systemd-sysv-generator[154647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:35:22 compute-0 ceph-mon[75840]: pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:22 compute-0 sudo[154592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:35:22 compute-0 sudo[154592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:22 compute-0 systemd[1]: Starting ovn_controller container...
Nov 22 05:35:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/139338a45710c654aec62114b17db3a4fe19b4baee80acf159f7b3dd93a9a697/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736.
Nov 22 05:35:22 compute-0 podman[154656]: 2025-11-22 05:35:22.879461893 +0000 UTC m=+0.157044423 container init 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:35:22 compute-0 ovn_controller[154671]: + sudo -E kolla_set_configs
Nov 22 05:35:22 compute-0 podman[154656]: 2025-11-22 05:35:22.925348541 +0000 UTC m=+0.202931031 container start 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 22 05:35:22 compute-0 edpm-start-podman-container[154656]: ovn_controller
Nov 22 05:35:22 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 22 05:35:22 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 22 05:35:23 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 22 05:35:23 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 22 05:35:23 compute-0 podman[154700]: 2025-11-22 05:35:23.02980012 +0000 UTC m=+0.087878618 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 05:35:23 compute-0 systemd[154742]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 22 05:35:23 compute-0 edpm-start-podman-container[154655]: Creating additional drop-in dependency for "ovn_controller" (0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736)
Nov 22 05:35:23 compute-0 systemd[1]: 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736-3df806f0648129ff.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 05:35:23 compute-0 systemd[1]: 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736-3df806f0648129ff.service: Failed with result 'exit-code'.
Nov 22 05:35:23 compute-0 systemd[1]: Reloading.
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.130279467 +0000 UTC m=+0.068899559 container create 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:35:23 compute-0 systemd-rc-local-generator[154809]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:35:23 compute-0 systemd[154742]: Queued start job for default target Main User Target.
Nov 22 05:35:23 compute-0 systemd-sysv-generator[154814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.105911739 +0000 UTC m=+0.044531841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:23 compute-0 systemd[154742]: Created slice User Application Slice.
Nov 22 05:35:23 compute-0 systemd[154742]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 22 05:35:23 compute-0 systemd[154742]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 05:35:23 compute-0 systemd[154742]: Reached target Paths.
Nov 22 05:35:23 compute-0 systemd[154742]: Reached target Timers.
Nov 22 05:35:23 compute-0 systemd[154742]: Starting D-Bus User Message Bus Socket...
Nov 22 05:35:23 compute-0 systemd[154742]: Starting Create User's Volatile Files and Directories...
Nov 22 05:35:23 compute-0 systemd[154742]: Finished Create User's Volatile Files and Directories.
Nov 22 05:35:23 compute-0 systemd[154742]: Listening on D-Bus User Message Bus Socket.
Nov 22 05:35:23 compute-0 systemd[154742]: Reached target Sockets.
Nov 22 05:35:23 compute-0 systemd[154742]: Reached target Basic System.
Nov 22 05:35:23 compute-0 systemd[154742]: Reached target Main User Target.
Nov 22 05:35:23 compute-0 systemd[154742]: Startup finished in 168ms.
Nov 22 05:35:23 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 22 05:35:23 compute-0 systemd[1]: Started ovn_controller container.
Nov 22 05:35:23 compute-0 systemd[1]: Started libpod-conmon-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope.
Nov 22 05:35:23 compute-0 systemd[1]: Started Session c1 of User root.
Nov 22 05:35:23 compute-0 sudo[154500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.449540617 +0000 UTC m=+0.388160709 container init 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.458799105 +0000 UTC m=+0.397419167 container start 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.462039314 +0000 UTC m=+0.400659416 container attach 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:35:23 compute-0 ecstatic_hodgkin[154821]: 167 167
Nov 22 05:35:23 compute-0 systemd[1]: libpod-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope: Deactivated successfully.
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.465898552 +0000 UTC m=+0.404518614 container died 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-23d419523812b08647e78574c2473d0c0a6fb05bf1db1a3b1b27510769109ac5-merged.mount: Deactivated successfully.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:35:23 compute-0 ovn_controller[154671]: INFO:__main__:Validating config file
Nov 22 05:35:23 compute-0 ovn_controller[154671]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:35:23 compute-0 ovn_controller[154671]: INFO:__main__:Writing out command to execute
Nov 22 05:35:23 compute-0 podman[154754]: 2025-11-22 05:35:23.502311456 +0000 UTC m=+0.440931518 container remove 1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:35:23 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: ++ cat /run_command
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + ARGS=
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + sudo kolla_copy_cacerts
Nov 22 05:35:23 compute-0 systemd[1]: libpod-conmon-1c4359111aab9556d6bae415a6ddbcf74b739bc21033af23d59a912a49ab5d82.scope: Deactivated successfully.
Nov 22 05:35:23 compute-0 systemd[1]: Started Session c2 of User root.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + [[ ! -n '' ]]
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + . kolla_extend_start
Nov 22 05:35:23 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + umask 0022
Nov 22 05:35:23 compute-0 ovn_controller[154671]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.5915] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.5922] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.5934] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.5941] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.5945] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 05:35:23 compute-0 kernel: br-int: entered promiscuous mode
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 05:35:23 compute-0 ovn_controller[154671]: 2025-11-22T05:35:23Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.6251] manager: (ovn-4b7cc9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 22 05:35:23 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.6449] device (genev_sys_6081): carrier: link connected
Nov 22 05:35:23 compute-0 NetworkManager[49751]: <info>  [1763789723.6454] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 22 05:35:23 compute-0 systemd-udevd[154904]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 05:35:23 compute-0 systemd-udevd[154905]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 05:35:23 compute-0 podman[154908]: 2025-11-22 05:35:23.711349296 +0000 UTC m=+0.052396910 container create e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 05:35:23 compute-0 systemd[1]: Started libpod-conmon-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope.
Nov 22 05:35:23 compute-0 podman[154908]: 2025-11-22 05:35:23.68453839 +0000 UTC m=+0.025586084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:35:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:35:23 compute-0 podman[154908]: 2025-11-22 05:35:23.813446339 +0000 UTC m=+0.154493953 container init e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:35:23 compute-0 podman[154908]: 2025-11-22 05:35:23.822747428 +0000 UTC m=+0.163795032 container start e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:35:23 compute-0 podman[154908]: 2025-11-22 05:35:23.826350549 +0000 UTC m=+0.167398213 container attach e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:35:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:23 compute-0 sudo[155034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttgicyujdtnwqblppqivbxhlxpmgeena ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789723.6640143-609-57220867736460/AnsiballZ_command.py'
Nov 22 05:35:23 compute-0 sudo[155034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:24 compute-0 python3.9[155036]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:35:24 compute-0 ovs-vsctl[155037]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 22 05:35:24 compute-0 sudo[155034]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:24 compute-0 sudo[155207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amwhbmcvotnxrbdtyuaovfboqhdqwtpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789724.4014242-617-50754237110863/AnsiballZ_command.py'
Nov 22 05:35:24 compute-0 sudo[155207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:24 compute-0 friendly_jackson[154956]: {
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_id": 1,
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "type": "bluestore"
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     },
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_id": 2,
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "type": "bluestore"
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     },
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_id": 0,
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:         "type": "bluestore"
Nov 22 05:35:24 compute-0 friendly_jackson[154956]:     }
Nov 22 05:35:24 compute-0 friendly_jackson[154956]: }
Nov 22 05:35:24 compute-0 systemd[1]: libpod-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Deactivated successfully.
Nov 22 05:35:24 compute-0 systemd[1]: libpod-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Consumed 1.110s CPU time.
Nov 22 05:35:24 compute-0 podman[154908]: 2025-11-22 05:35:24.933870506 +0000 UTC m=+1.274918170 container died e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:35:24 compute-0 ceph-mon[75840]: pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-301d985d74fb3ad9bf5b504d51226068ef24ea467c58230ec0ffbce92a501467-merged.mount: Deactivated successfully.
Nov 22 05:35:25 compute-0 podman[154908]: 2025-11-22 05:35:25.029991342 +0000 UTC m=+1.371038986 container remove e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:35:25 compute-0 python3.9[155211]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:35:25 compute-0 systemd[1]: libpod-conmon-e0f2cbca0eff92d0c70429c915975425bb64ae3fb4dcdd006120222efe384fb4.scope: Deactivated successfully.
Nov 22 05:35:25 compute-0 ovs-vsctl[155233]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 22 05:35:25 compute-0 sudo[154592]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:25 compute-0 sudo[155207]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:35:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:35:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e92560b4-9bae-469c-a50f-cc40de643a59 does not exist
Nov 22 05:35:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ddf92920-6fd5-47a4-8cc9-cb95b2a4c989 does not exist
Nov 22 05:35:25 compute-0 sudo[155236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:35:25 compute-0 sudo[155236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:25 compute-0 sudo[155236]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:25 compute-0 sudo[155284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:35:25 compute-0 sudo[155284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:35:25 compute-0 sudo[155284]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:25 compute-0 sudo[155434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prtvwwshnhyfagrpmssrjpwobvburlav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789725.5883648-631-216269116278015/AnsiballZ_command.py'
Nov 22 05:35:25 compute-0 sudo[155434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:35:26 compute-0 python3.9[155436]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:35:26 compute-0 ovs-vsctl[155437]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 22 05:35:26 compute-0 sudo[155434]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:26 compute-0 sshd-session[143143]: Connection closed by 192.168.122.30 port 43104
Nov 22 05:35:26 compute-0 sshd-session[143127]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:35:26 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 22 05:35:26 compute-0 systemd[1]: session-45.scope: Consumed 1min 4.188s CPU time.
Nov 22 05:35:26 compute-0 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Nov 22 05:35:26 compute-0 systemd-logind[798]: Removed session 45.
Nov 22 05:35:27 compute-0 ceph-mon[75840]: pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:29 compute-0 ceph-mon[75840]: pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:31 compute-0 ceph-mon[75840]: pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:32 compute-0 sshd-session[155462]: Accepted publickey for zuul from 192.168.122.30 port 53028 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:35:32 compute-0 systemd-logind[798]: New session 47 of user zuul.
Nov 22 05:35:32 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 22 05:35:32 compute-0 sshd-session[155462]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:35:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:33 compute-0 ceph-mon[75840]: pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:33 compute-0 python3.9[155615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:35:33 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 22 05:35:33 compute-0 systemd[154742]: Activating special unit Exit the Session...
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped target Main User Target.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped target Basic System.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped target Paths.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped target Sockets.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped target Timers.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 05:35:33 compute-0 systemd[154742]: Closed D-Bus User Message Bus Socket.
Nov 22 05:35:33 compute-0 systemd[154742]: Stopped Create User's Volatile Files and Directories.
Nov 22 05:35:33 compute-0 systemd[154742]: Removed slice User Application Slice.
Nov 22 05:35:33 compute-0 systemd[154742]: Reached target Shutdown.
Nov 22 05:35:33 compute-0 systemd[154742]: Finished Exit the Session.
Nov 22 05:35:33 compute-0 systemd[154742]: Reached target Exit the Session.
Nov 22 05:35:33 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 22 05:35:33 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 22 05:35:33 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 22 05:35:33 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 22 05:35:33 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 22 05:35:33 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 22 05:35:33 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 22 05:35:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:34 compute-0 sudo[155771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sibugydqroicfcxtqupovaodldrrytgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789733.7388616-34-29106380260120/AnsiballZ_file.py'
Nov 22 05:35:34 compute-0 sudo[155771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:34 compute-0 python3.9[155773]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:34 compute-0 sudo[155771]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:34 compute-0 sudo[155923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqcvympacwemrqqfkraxkmrzifmmpvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789734.6292946-34-191875829908699/AnsiballZ_file.py'
Nov 22 05:35:34 compute-0 sudo[155923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:35 compute-0 ceph-mon[75840]: pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:35 compute-0 python3.9[155925]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:35 compute-0 sudo[155923]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:35 compute-0 sudo[156075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvumfbwscvbieorerzvqslbnadmfxyjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789735.3898208-34-81359528194651/AnsiballZ_file.py'
Nov 22 05:35:35 compute-0 sudo[156075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:35 compute-0 python3.9[156077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:35 compute-0 sudo[156075]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:36 compute-0 sudo[156227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxkzaiojzwgeyegzyavpsitwmyxrkoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789736.121852-34-227270650545421/AnsiballZ_file.py'
Nov 22 05:35:36 compute-0 sudo[156227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:36 compute-0 python3.9[156229]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:36 compute-0 sudo[156227]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:36 compute-0 auditd[704]: Audit daemon rotating log files
Nov 22 05:35:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:37 compute-0 ceph-mon[75840]: pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:37 compute-0 sudo[156379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnkfvdhvrujdxhizjqupsdainvwobhgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789736.861411-34-57414351608949/AnsiballZ_file.py'
Nov 22 05:35:37 compute-0 sudo[156379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:37 compute-0 python3.9[156381]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:37 compute-0 sudo[156379]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:38 compute-0 python3.9[156531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:35:39 compute-0 sudo[156681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uubqjamukaxypiuiazxtbqamieyyotln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789738.5579236-78-196037288110224/AnsiballZ_seboolean.py'
Nov 22 05:35:39 compute-0 sudo[156681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:39 compute-0 ceph-mon[75840]: pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:39 compute-0 python3.9[156683]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 22 05:35:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:39 compute-0 sudo[156681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:40 compute-0 python3.9[156833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:41 compute-0 ceph-mon[75840]: pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:41 compute-0 python3.9[156954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789740.1331282-86-155888685922788/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:35:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 18.51 MB, 0.03 MB/s
                                           Interval WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:35:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:42 compute-0 python3.9[157105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:43 compute-0 python3.9[157226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789741.8837233-101-138408169660805/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:43 compute-0 ceph-mon[75840]: pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:35:43
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'backups', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:35:43 compute-0 sudo[157376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mordaptggjwbfwtowgeoaahkacnfqyfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789743.4506574-118-121668772337470/AnsiballZ_setup.py'
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:35:43 compute-0 sudo[157376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:35:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:44 compute-0 python3.9[157378]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:35:44 compute-0 sudo[157376]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:44 compute-0 sudo[157460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swftrmmxopdtmuoiemeucopfhgesexin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789743.4506574-118-121668772337470/AnsiballZ_dnf.py'
Nov 22 05:35:44 compute-0 sudo[157460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:45 compute-0 ceph-mon[75840]: pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:46 compute-0 python3.9[157462]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:35:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:47 compute-0 sudo[157460]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:35:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 19.67 MB, 0.03 MB/s
                                           Interval WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:35:47 compute-0 ceph-mon[75840]: pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:48 compute-0 sudo[157615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuramcjmvllxppiyiihgmxsgxymeaajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789747.570081-130-150558359930768/AnsiballZ_systemd.py'
Nov 22 05:35:48 compute-0 sudo[157615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:48 compute-0 python3.9[157617]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:35:48 compute-0 sudo[157615]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:49 compute-0 python3.9[157770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:49 compute-0 ceph-mon[75840]: pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:50 compute-0 python3.9[157891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789748.8178408-138-20842925345923/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:50 compute-0 python3.9[158041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:51 compute-0 python3.9[158162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789750.2469435-138-157367254458691/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:51 compute-0 ceph-mon[75840]: pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:52 compute-0 python3.9[158312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:35:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:35:53 compute-0 ovn_controller[154671]: 2025-11-22T05:35:53Z|00025|memory|INFO|16512 kB peak resident set size after 29.6 seconds
Nov 22 05:35:53 compute-0 ovn_controller[154671]: 2025-11-22T05:35:53Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 22 05:35:53 compute-0 podman[158407]: 2025-11-22 05:35:53.208928767 +0000 UTC m=+0.131533073 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 05:35:53 compute-0 python3.9[158446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789752.1447866-182-218181689703308/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:35:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 18.55 MB, 0.03 MB/s
                                           Interval WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:35:53 compute-0 ceph-mon[75840]: pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:54 compute-0 python3.9[158607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:54 compute-0 python3.9[158728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789753.5435038-182-133339693348897/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:55 compute-0 python3.9[158878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:35:55 compute-0 ceph-mon[75840]: pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:56 compute-0 sudo[159030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblvcsmjlyeoskmoplwuzusfpfgwzdaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789755.727423-220-233160241856205/AnsiballZ_file.py'
Nov 22 05:35:56 compute-0 sudo[159030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:56 compute-0 python3.9[159032]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:56 compute-0 sudo[159030]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:56 compute-0 ceph-mon[75840]: pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:56 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 05:35:56 compute-0 sudo[159182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjbsnmmbjlcrecfrdcfysisduahgbvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789756.5957048-228-150312134412392/AnsiballZ_stat.py'
Nov 22 05:35:56 compute-0 sudo[159182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:57 compute-0 python3.9[159184]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:57 compute-0 sudo[159182]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:35:57 compute-0 sudo[159260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uciekkhtpyqrrgrdniyawcvonxvatpdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789756.5957048-228-150312134412392/AnsiballZ_file.py'
Nov 22 05:35:57 compute-0 sudo[159260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:57 compute-0 python3.9[159262]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:57 compute-0 sudo[159260]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:58 compute-0 sudo[159412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piwisnebaqilxmzfndjcbntfremegiqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789757.8241534-228-126280236329702/AnsiballZ_stat.py'
Nov 22 05:35:58 compute-0 sudo[159412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:58 compute-0 python3.9[159414]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:35:58 compute-0 sudo[159412]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:58 compute-0 sudo[159490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhffpsklczxsgmimjzpgbpplciprltvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789757.8241534-228-126280236329702/AnsiballZ_file.py'
Nov 22 05:35:58 compute-0 sudo[159490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:58 compute-0 python3.9[159492]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:35:58 compute-0 sudo[159490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:58 compute-0 ceph-mon[75840]: pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:35:59 compute-0 sudo[159642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfceqyncsxflgymqxgmyzafhciwixxfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789759.1706896-251-159315439361264/AnsiballZ_file.py'
Nov 22 05:35:59 compute-0 sudo[159642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:35:59 compute-0 python3.9[159644]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:35:59 compute-0 sudo[159642]: pam_unix(sudo:session): session closed for user root
Nov 22 05:35:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:00 compute-0 sudo[159794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izsywnfxfqbssksrdswrwgutouoijpor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789759.9906614-259-230572645596546/AnsiballZ_stat.py'
Nov 22 05:36:00 compute-0 sudo[159794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:00 compute-0 python3.9[159796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:00 compute-0 sudo[159794]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:00 compute-0 sudo[159872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aylkbpwgwtuxvjroblidlkpymhtbjywy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789759.9906614-259-230572645596546/AnsiballZ_file.py'
Nov 22 05:36:00 compute-0 sudo[159872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:00 compute-0 ceph-mon[75840]: pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:01 compute-0 python3.9[159874]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:01 compute-0 sudo[159872]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:01 compute-0 sudo[160024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmbdpifvgyqwlfacsthxugojkbccoueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789761.3394344-271-70144766333089/AnsiballZ_stat.py'
Nov 22 05:36:01 compute-0 sudo[160024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:01 compute-0 python3.9[160026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:01 compute-0 sudo[160024]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:02 compute-0 sudo[160102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfevdtcduniwdtrljrregxznvpllmtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789761.3394344-271-70144766333089/AnsiballZ_file.py'
Nov 22 05:36:02 compute-0 sudo[160102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:02 compute-0 python3.9[160104]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:02 compute-0 sudo[160102]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:02 compute-0 ceph-mon[75840]: pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:03 compute-0 sudo[160254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaoitkgpvazsuytjulbbsqlltdycsszn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789762.63587-283-27545488430756/AnsiballZ_systemd.py'
Nov 22 05:36:03 compute-0 sudo[160254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:03 compute-0 python3.9[160256]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:03 compute-0 systemd[1]: Reloading.
Nov 22 05:36:03 compute-0 systemd-rc-local-generator[160280]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:03 compute-0 systemd-sysv-generator[160285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:03 compute-0 sudo[160254]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:04 compute-0 sudo[160442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qutlzstmstqjxhtvaodpbipcbbzrjlwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789763.9828055-291-143885100946614/AnsiballZ_stat.py'
Nov 22 05:36:04 compute-0 sudo[160442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:04 compute-0 python3.9[160444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:04 compute-0 sudo[160442]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:04 compute-0 sudo[160520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfouebgcwjuywedvptfebwtfehmsltfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789763.9828055-291-143885100946614/AnsiballZ_file.py'
Nov 22 05:36:04 compute-0 sudo[160520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:05 compute-0 ceph-mon[75840]: pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:05 compute-0 python3.9[160522]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:05 compute-0 sudo[160520]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:05 compute-0 sudo[160672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnmjgwqdivoksocnbrmjmdatjoaxhyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789765.3948514-303-36570975556224/AnsiballZ_stat.py'
Nov 22 05:36:05 compute-0 sudo[160672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:05 compute-0 python3.9[160674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:06 compute-0 sudo[160672]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:06 compute-0 sudo[160750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhqrouzltpkyzanjaluanvdxxwgjntcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789765.3948514-303-36570975556224/AnsiballZ_file.py'
Nov 22 05:36:06 compute-0 sudo[160750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:06 compute-0 python3.9[160752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:06 compute-0 sudo[160750]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:07 compute-0 ceph-mon[75840]: pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:07 compute-0 sudo[160902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehknwovwxvbshexabjhnoszgrmsjaav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789766.7578187-315-107138586988347/AnsiballZ_systemd.py'
Nov 22 05:36:07 compute-0 sudo[160902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:07 compute-0 python3.9[160904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:07 compute-0 systemd[1]: Reloading.
Nov 22 05:36:07 compute-0 systemd-rc-local-generator[160932]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:07 compute-0 systemd-sysv-generator[160936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:07 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 05:36:07 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 05:36:07 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 05:36:07 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 05:36:07 compute-0 sudo[160902]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:08 compute-0 sudo[161096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oedxkojaydyoekoixvyumzljfypkwmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789768.1398404-325-276762165348090/AnsiballZ_file.py'
Nov 22 05:36:08 compute-0 sudo[161096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:08 compute-0 python3.9[161098]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:36:08 compute-0 sudo[161096]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:09 compute-0 ceph-mon[75840]: pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:09 compute-0 sudo[161248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxikjtcwkspfmrwnmcqkanmgdquvhrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789768.856824-333-93780099256887/AnsiballZ_stat.py'
Nov 22 05:36:09 compute-0 sudo[161248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:09 compute-0 python3.9[161250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:09 compute-0 sudo[161248]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:09 compute-0 sudo[161371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrtuawpyxugsioqvxbydrrkwhhxvigem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789768.856824-333-93780099256887/AnsiballZ_copy.py'
Nov 22 05:36:09 compute-0 sudo[161371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:10 compute-0 python3.9[161373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763789768.856824-333-93780099256887/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:36:10 compute-0 sudo[161371]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:10 compute-0 sudo[161523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdlaajfskjdgsjxoctklaiqfidlkugjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789770.4857357-350-186124696467936/AnsiballZ_file.py'
Nov 22 05:36:10 compute-0 sudo[161523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:11 compute-0 ceph-mon[75840]: pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:11 compute-0 python3.9[161525]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:36:11 compute-0 sudo[161523]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:11 compute-0 sudo[161675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elhqmefduwljqvlofbuzckoggcrwbcxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789771.3650877-358-217053751564231/AnsiballZ_stat.py'
Nov 22 05:36:11 compute-0 sudo[161675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:11 compute-0 python3.9[161677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:36:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:11 compute-0 sudo[161675]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:12 compute-0 sudo[161798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihebqsxvtjetdmplvkomjdhsaoldeunh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789771.3650877-358-217053751564231/AnsiballZ_copy.py'
Nov 22 05:36:12 compute-0 sudo[161798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:12 compute-0 python3.9[161800]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763789771.3650877-358-217053751564231/.source.json _original_basename=.uo6dppdf follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:12 compute-0 sudo[161798]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:13 compute-0 ceph-mon[75840]: pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:13 compute-0 sudo[161950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqvscymdzejuochgchrcoxcizhirscjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789772.855412-373-102333208950094/AnsiballZ_file.py'
Nov 22 05:36:13 compute-0 sudo[161950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:13 compute-0 python3.9[161952]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:13 compute-0 sudo[161950]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:14 compute-0 sudo[162102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvxkyihbijrhaocrcshqpgcvnmvbjyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789773.7263966-381-49947284036252/AnsiballZ_stat.py'
Nov 22 05:36:14 compute-0 sudo[162102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:14 compute-0 sudo[162102]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:14 compute-0 sudo[162225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmcgengpatnxffdxydgliadpenpdwoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789773.7263966-381-49947284036252/AnsiballZ_copy.py'
Nov 22 05:36:14 compute-0 sudo[162225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:14 compute-0 sudo[162225]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:15 compute-0 ceph-mon[75840]: pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:15 compute-0 sudo[162377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwawlqabblduhzshsopawgdygrghhkeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789775.3314543-398-265444063067126/AnsiballZ_container_config_data.py'
Nov 22 05:36:15 compute-0 sudo[162377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:16 compute-0 python3.9[162379]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 22 05:36:16 compute-0 sudo[162377]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:16 compute-0 sudo[162529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqtgiiuizaermoyjoxogsfvvtgaqmco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789776.3817449-407-124518194423648/AnsiballZ_container_config_hash.py'
Nov 22 05:36:16 compute-0 sudo[162529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:17 compute-0 ceph-mon[75840]: pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:17 compute-0 python3.9[162531]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 05:36:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:17 compute-0 sudo[162529]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:17 compute-0 sudo[162681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utckkxicnlgpakztalobfqnabnzgoilm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789777.4566083-416-154030091304911/AnsiballZ_podman_container_info.py'
Nov 22 05:36:17 compute-0 sudo[162681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:18 compute-0 python3.9[162683]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 05:36:18 compute-0 sudo[162681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:19 compute-0 ceph-mon[75840]: pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:19 compute-0 sudo[162861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpammujkbcgtskqddradzidihzemetdh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763789779.0028338-429-129924092830769/AnsiballZ_edpm_container_manage.py'
Nov 22 05:36:19 compute-0 sudo[162861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:19 compute-0 python3[162863]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 05:36:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:21 compute-0 ceph-mon[75840]: pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:23 compute-0 ceph-mon[75840]: pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:25 compute-0 sudo[162955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:25 compute-0 sudo[162955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:25 compute-0 sudo[162955]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:25 compute-0 sudo[162980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:36:25 compute-0 sudo[162980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:25 compute-0 sudo[162980]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:25 compute-0 sudo[163005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:25 compute-0 sudo[163005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:25 compute-0 sudo[163005]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:25 compute-0 sudo[163030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:36:25 compute-0 sudo[163030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:25 compute-0 ceph-mon[75840]: pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:26 compute-0 ceph-mon[75840]: pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:27 compute-0 podman[162942]: 2025-11-22 05:36:27.429081471 +0000 UTC m=+3.692549660 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:36:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:28 compute-0 sudo[163030]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:28 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 051145ba-8fbd-47d7-9b9a-0dc036859b9d does not exist
Nov 22 05:36:28 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8705bc8b-24fa-4c71-86ba-79481bc7ebcb does not exist
Nov 22 05:36:28 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1a9a401e-f18b-4205-8be5-48b4e1789e88 does not exist
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:36:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:36:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:36:29 compute-0 sudo[163142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:29 compute-0 sudo[163142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:29 compute-0 sudo[163142]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:29 compute-0 podman[162877]: 2025-11-22 05:36:29.059026003 +0000 UTC m=+9.076510320 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 05:36:29 compute-0 sudo[163167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:36:29 compute-0 sudo[163167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:29 compute-0 sudo[163167]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:29 compute-0 ceph-mon[75840]: pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:36:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:36:29 compute-0 sudo[163209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:29 compute-0 sudo[163209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:29 compute-0 sudo[163209]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:29 compute-0 podman[163220]: 2025-11-22 05:36:29.223010835 +0000 UTC m=+0.063874425 container create 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 05:36:29 compute-0 podman[163220]: 2025-11-22 05:36:29.186380941 +0000 UTC m=+0.027244601 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 05:36:29 compute-0 python3[162863]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 05:36:29 compute-0 sudo[163251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:36:29 compute-0 sudo[163251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:29 compute-0 sudo[162861]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.755392049 +0000 UTC m=+0.053281408 container create 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:36:29 compute-0 systemd[1]: Started libpod-conmon-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope.
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.735834348 +0000 UTC m=+0.033723737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.857615954 +0000 UTC m=+0.155505393 container init 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.869608779 +0000 UTC m=+0.167498178 container start 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.881647736 +0000 UTC m=+0.179537125 container attach 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:36:29 compute-0 vigilant_bhaskara[163466]: 167 167
Nov 22 05:36:29 compute-0 systemd[1]: libpod-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope: Deactivated successfully.
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.893768806 +0000 UTC m=+0.191658155 container died 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:36:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c214f685565a46427460a62fd6f7c0bf6096346cd5da2258596dde3221415af-merged.mount: Deactivated successfully.
Nov 22 05:36:29 compute-0 sudo[163522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrlsjthdrphokcmmfpztaaklgbdibchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789789.5946383-437-223783014726446/AnsiballZ_stat.py'
Nov 22 05:36:29 compute-0 sudo[163522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:29 compute-0 podman[163417]: 2025-11-22 05:36:29.956117808 +0000 UTC m=+0.254007197 container remove 2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:36:29 compute-0 systemd[1]: libpod-conmon-2b28793f5c25b08cd645faf59d7e0c259e0fdb1c4614e23eccf51190cf30a74b.scope: Deactivated successfully.
Nov 22 05:36:30 compute-0 podman[163536]: 2025-11-22 05:36:30.177398625 +0000 UTC m=+0.070007901 container create 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:36:30 compute-0 python3.9[163528]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:36:30 compute-0 sudo[163522]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:30 compute-0 systemd[1]: Started libpod-conmon-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope.
Nov 22 05:36:30 compute-0 podman[163536]: 2025-11-22 05:36:30.145682145 +0000 UTC m=+0.038291471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:30 compute-0 podman[163536]: 2025-11-22 05:36:30.284510264 +0000 UTC m=+0.177119600 container init 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:36:30 compute-0 podman[163536]: 2025-11-22 05:36:30.297461185 +0000 UTC m=+0.190070471 container start 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:36:30 compute-0 podman[163536]: 2025-11-22 05:36:30.301457604 +0000 UTC m=+0.194066880 container attach 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:36:30 compute-0 sudo[163709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbesgsfaklpxfwysfauirwyodqwtzvnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789790.5177894-446-180928511218370/AnsiballZ_file.py'
Nov 22 05:36:30 compute-0 sudo[163709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:31 compute-0 python3.9[163711]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:31 compute-0 sudo[163709]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 ceph-mon[75840]: pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:31 compute-0 sudo[163805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysiotycazibgwsajvvrdmdgvxueiagld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789790.5177894-446-180928511218370/AnsiballZ_stat.py'
Nov 22 05:36:31 compute-0 sudo[163805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:31 compute-0 wonderful_archimedes[163555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:36:31 compute-0 wonderful_archimedes[163555]: --> relative data size: 1.0
Nov 22 05:36:31 compute-0 wonderful_archimedes[163555]: --> All data devices are unavailable
Nov 22 05:36:31 compute-0 systemd[1]: libpod-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Deactivated successfully.
Nov 22 05:36:31 compute-0 podman[163536]: 2025-11-22 05:36:31.485715626 +0000 UTC m=+1.378324942 container died 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:36:31 compute-0 systemd[1]: libpod-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Consumed 1.084s CPU time.
Nov 22 05:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e28982868f7f25c888d4e080a768dd17cd9a638f3212c4e8aee3800f0794cbe-merged.mount: Deactivated successfully.
Nov 22 05:36:31 compute-0 podman[163536]: 2025-11-22 05:36:31.595691831 +0000 UTC m=+1.488301117 container remove 8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:36:31 compute-0 systemd[1]: libpod-conmon-8fe480cbea5178a68dcd26cfe828e8ade288e3f1b7938f8cf8055083072e1933.scope: Deactivated successfully.
Nov 22 05:36:31 compute-0 python3.9[163810]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:36:31 compute-0 sudo[163251]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 sudo[163805]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 sudo[163826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:31 compute-0 sudo[163826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:31 compute-0 sudo[163826]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 sudo[163874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:36:31 compute-0 sudo[163874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:31 compute-0 sudo[163874]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 sudo[163928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:31 compute-0 sudo[163928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:31 compute-0 sudo[163928]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:31 compute-0 sudo[163953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:36:31 compute-0 sudo[163953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:32 compute-0 sudo[164125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pevixnxcmehqdfmrrypvnzhdrtlpfkxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789791.7180717-446-20564874618636/AnsiballZ_copy.py'
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.226189109 +0000 UTC m=+0.050938184 container create 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:36:32 compute-0 sudo[164125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:32 compute-0 systemd[1]: Started libpod-conmon-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope.
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.201200271 +0000 UTC m=+0.025949386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.321076485 +0000 UTC m=+0.145825610 container init 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.333014619 +0000 UTC m=+0.157763724 container start 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.336862204 +0000 UTC m=+0.161611299 container attach 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:36:32 compute-0 youthful_ardinghelli[164133]: 167 167
Nov 22 05:36:32 compute-0 systemd[1]: libpod-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope: Deactivated successfully.
Nov 22 05:36:32 compute-0 conmon[164133]: conmon 943b83c16470d7bd04c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope/container/memory.events
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.342056974 +0000 UTC m=+0.166806069 container died 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-17f651efc525b9aace07ee2b32a35a429e0d93261306b5184fbae1d84a5e3436-merged.mount: Deactivated successfully.
Nov 22 05:36:32 compute-0 podman[164099]: 2025-11-22 05:36:32.398306352 +0000 UTC m=+0.223055447 container remove 943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:36:32 compute-0 systemd[1]: libpod-conmon-943b83c16470d7bd04c73686a60d8242882215c46cc03f892e2bc9a56e0d9079.scope: Deactivated successfully.
Nov 22 05:36:32 compute-0 python3.9[164130]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763789791.7180717-446-20564874618636/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:32 compute-0 sudo[164125]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:32 compute-0 podman[164163]: 2025-11-22 05:36:32.611739866 +0000 UTC m=+0.068005018 container create 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:36:32 compute-0 systemd[1]: Started libpod-conmon-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope.
Nov 22 05:36:32 compute-0 podman[164163]: 2025-11-22 05:36:32.582525293 +0000 UTC m=+0.038790535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:32 compute-0 podman[164163]: 2025-11-22 05:36:32.72938246 +0000 UTC m=+0.185647652 container init 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:36:32 compute-0 podman[164163]: 2025-11-22 05:36:32.746185066 +0000 UTC m=+0.202450228 container start 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:36:32 compute-0 podman[164163]: 2025-11-22 05:36:32.750309298 +0000 UTC m=+0.206574450 container attach 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:36:32 compute-0 sudo[164251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqyoovwavgwkrjtocastyrpuesnbcjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789791.7180717-446-20564874618636/AnsiballZ_systemd.py'
Nov 22 05:36:32 compute-0 sudo[164251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:33 compute-0 python3.9[164253]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:36:33 compute-0 systemd[1]: Reloading.
Nov 22 05:36:33 compute-0 ceph-mon[75840]: pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:33 compute-0 systemd-rc-local-generator[164275]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:33 compute-0 systemd-sysv-generator[164278]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:33 compute-0 sudo[164251]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:33 compute-0 eager_ganguly[164220]: {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     "0": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "devices": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "/dev/loop3"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             ],
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_name": "ceph_lv0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_size": "21470642176",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "name": "ceph_lv0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "tags": {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_name": "ceph",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.crush_device_class": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.encrypted": "0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_id": "0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.vdo": "0"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             },
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "vg_name": "ceph_vg0"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         }
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     ],
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     "1": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "devices": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "/dev/loop4"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             ],
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_name": "ceph_lv1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_size": "21470642176",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "name": "ceph_lv1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "tags": {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_name": "ceph",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.crush_device_class": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.encrypted": "0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_id": "1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.vdo": "0"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             },
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "vg_name": "ceph_vg1"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         }
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     ],
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     "2": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "devices": [
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "/dev/loop5"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             ],
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_name": "ceph_lv2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_size": "21470642176",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "name": "ceph_lv2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "tags": {
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.cluster_name": "ceph",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.crush_device_class": "",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.encrypted": "0",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osd_id": "2",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:                 "ceph.vdo": "0"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             },
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "type": "block",
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:             "vg_name": "ceph_vg2"
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:         }
Nov 22 05:36:33 compute-0 eager_ganguly[164220]:     ]
Nov 22 05:36:33 compute-0 eager_ganguly[164220]: }
Nov 22 05:36:33 compute-0 systemd[1]: libpod-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope: Deactivated successfully.
Nov 22 05:36:33 compute-0 podman[164163]: 2025-11-22 05:36:33.492019405 +0000 UTC m=+0.948284527 container died 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb29f6cfde1d684741cfd63f68acbfaf9280e06225f9a1351f51198744d14c71-merged.mount: Deactivated successfully.
Nov 22 05:36:33 compute-0 podman[164163]: 2025-11-22 05:36:33.554443029 +0000 UTC m=+1.010708161 container remove 65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ganguly, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:36:33 compute-0 systemd[1]: libpod-conmon-65c375cffa4123367cf5f29bff72fc289e13ca0637073ae0f9d445cfe11d3f95.scope: Deactivated successfully.
Nov 22 05:36:33 compute-0 sudo[163953]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:33 compute-0 sudo[164328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:33 compute-0 sudo[164328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:33 compute-0 sudo[164328]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:33 compute-0 sudo[164361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:36:33 compute-0 sudo[164361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:33 compute-0 sudo[164361]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:33 compute-0 sudo[164445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltdkdztakclpkdoegosgklexhcgqvtso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789791.7180717-446-20564874618636/AnsiballZ_systemd.py'
Nov 22 05:36:33 compute-0 sudo[164445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:33 compute-0 sudo[164414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:33 compute-0 sudo[164414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:33 compute-0 sudo[164414]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:33 compute-0 sudo[164458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:36:33 compute-0 sudo[164458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:34 compute-0 python3.9[164453]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:34 compute-0 systemd[1]: Reloading.
Nov 22 05:36:34 compute-0 systemd-rc-local-generator[164547]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:34 compute-0 systemd-sysv-generator[164551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.382026518 +0000 UTC m=+0.027674942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.475912096 +0000 UTC m=+0.121560440 container create e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:36:34 compute-0 systemd[1]: Started libpod-conmon-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope.
Nov 22 05:36:34 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 22 05:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:34 compute-0 sshd-session[164455]: Invalid user solana from 80.94.92.166 port 53894
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.690112442 +0000 UTC m=+0.335760856 container init e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.701883442 +0000 UTC m=+0.347531806 container start e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.706644871 +0000 UTC m=+0.352293245 container attach e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:36:34 compute-0 xenodochial_babbage[164577]: 167 167
Nov 22 05:36:34 compute-0 systemd[1]: libpod-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope: Deactivated successfully.
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.713367434 +0000 UTC m=+0.359015828 container died e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-58b651d861bd338607d7b2c8c64cb741e1c878f0251a60e870c4e8bc4855a770-merged.mount: Deactivated successfully.
Nov 22 05:36:34 compute-0 podman[164558]: 2025-11-22 05:36:34.780932958 +0000 UTC m=+0.426581342 container remove e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:36:34 compute-0 systemd[1]: libpod-conmon-e8c8305351da1a6af2a5f61ac30c33d89cb0ad4d72ba1e8cdcaa6cad43c889ce.scope: Deactivated successfully.
Nov 22 05:36:34 compute-0 sshd-session[164455]: Connection closed by invalid user solana 80.94.92.166 port 53894 [preauth]
Nov 22 05:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e95c6c87860fb545097cf97bf3b7c73122bf952ef49bebf99f3689a8c83d8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f9e95c6c87860fb545097cf97bf3b7c73122bf952ef49bebf99f3689a8c83d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c.
Nov 22 05:36:34 compute-0 podman[164581]: 2025-11-22 05:36:34.896869485 +0000 UTC m=+0.296324336 container init 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:36:34 compute-0 ovn_metadata_agent[164613]: + sudo -E kolla_set_configs
Nov 22 05:36:34 compute-0 podman[164581]: 2025-11-22 05:36:34.93905137 +0000 UTC m=+0.338506161 container start 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:36:34 compute-0 edpm-start-podman-container[164581]: ovn_metadata_agent
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Validating config file
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Copying service configuration files
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Writing out command to execute
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: ++ cat /run_command
Nov 22 05:36:35 compute-0 podman[164627]: 2025-11-22 05:36:35.035753906 +0000 UTC m=+0.066804085 container create af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + CMD=neutron-ovn-metadata-agent
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + ARGS=
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + sudo kolla_copy_cacerts
Nov 22 05:36:35 compute-0 edpm-start-podman-container[164579]: Creating additional drop-in dependency for "ovn_metadata_agent" (0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c)
Nov 22 05:36:35 compute-0 podman[164621]: 2025-11-22 05:36:35.048264615 +0000 UTC m=+0.098203037 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + [[ ! -n '' ]]
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + . kolla_extend_start
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: Running command: 'neutron-ovn-metadata-agent'
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + umask 0022
Nov 22 05:36:35 compute-0 ovn_metadata_agent[164613]: + exec neutron-ovn-metadata-agent
Nov 22 05:36:35 compute-0 systemd[1]: Started libpod-conmon-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope.
Nov 22 05:36:35 compute-0 systemd[1]: Reloading.
Nov 22 05:36:35 compute-0 podman[164627]: 2025-11-22 05:36:35.010357986 +0000 UTC m=+0.041408175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:36:35 compute-0 ceph-mon[75840]: pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:35 compute-0 systemd-rc-local-generator[164710]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:35 compute-0 systemd-sysv-generator[164714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:35 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 22 05:36:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:36:35 compute-0 sudo[164445]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:35 compute-0 podman[164627]: 2025-11-22 05:36:35.412803022 +0000 UTC m=+0.443853191 container init af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:36:35 compute-0 podman[164627]: 2025-11-22 05:36:35.427703197 +0000 UTC m=+0.458753346 container start af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:36:35 compute-0 podman[164627]: 2025-11-22 05:36:35.43186073 +0000 UTC m=+0.462910969 container attach af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:36:35 compute-0 sshd-session[155465]: Connection closed by 192.168.122.30 port 53028
Nov 22 05:36:35 compute-0 sshd-session[155462]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:36:35 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 22 05:36:35 compute-0 systemd[1]: session-47.scope: Consumed 1min 1.435s CPU time.
Nov 22 05:36:35 compute-0 systemd-logind[798]: Session 47 logged out. Waiting for processes to exit.
Nov 22 05:36:35 compute-0 systemd-logind[798]: Removed session 47.
Nov 22 05:36:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:36 compute-0 silly_engelbart[164687]: {
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_id": 1,
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "type": "bluestore"
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     },
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_id": 2,
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "type": "bluestore"
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     },
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_id": 0,
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:         "type": "bluestore"
Nov 22 05:36:36 compute-0 silly_engelbart[164687]:     }
Nov 22 05:36:36 compute-0 silly_engelbart[164687]: }
Nov 22 05:36:36 compute-0 systemd[1]: libpod-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Deactivated successfully.
Nov 22 05:36:36 compute-0 podman[164627]: 2025-11-22 05:36:36.453164848 +0000 UTC m=+1.484215027 container died af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:36:36 compute-0 systemd[1]: libpod-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Consumed 1.027s CPU time.
Nov 22 05:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6103c17a3a328b697d2e8570d99d5c34d6354b42b4d5ae8394c4868dece33c1-merged.mount: Deactivated successfully.
Nov 22 05:36:36 compute-0 podman[164627]: 2025-11-22 05:36:36.536066968 +0000 UTC m=+1.567117147 container remove af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:36:36 compute-0 systemd[1]: libpod-conmon-af14cfbc351d162794fd1047f810076d13d860b9f7f02664e0ceb30cdeda2edb.scope: Deactivated successfully.
Nov 22 05:36:36 compute-0 sudo[164458]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:36:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:36:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:36 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3c5e8a40-8183-4283-8b9d-415ca50000a4 does not exist
Nov 22 05:36:36 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 96bae96a-793e-45c8-90cb-941347e27b22 does not exist
Nov 22 05:36:36 compute-0 sudo[164793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:36:36 compute-0 sudo[164793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:36 compute-0 sudo[164793]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:36 compute-0 sudo[164818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:36:36 compute-0 sudo[164818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:36:36 compute-0 sudo[164818]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 INFO neutron.common.config [-] Logging enabled!
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.854 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.855 164618 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.856 164618 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.857 164618 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.858 164618 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.859 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.860 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.861 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.862 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.863 164618 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.864 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.865 164618 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.866 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.867 164618 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.868 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.869 164618 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.870 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.871 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.872 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.873 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.874 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.875 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.876 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.877 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.878 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.879 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.880 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.881 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.882 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.883 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.884 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.885 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.886 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.887 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.888 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.889 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.890 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.891 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.892 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.893 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.894 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.895 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.896 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.897 164618 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.898 164618 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.915 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.915 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.916 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.916 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.917 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 22 05:36:36 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.939 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 772af8e6-0f26-443e-a044-9109439e729d (UUID: 772af8e6-0f26-443e-a044-9109439e729d) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.974 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.975 164618 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.978 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.984 164618 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.989 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '772af8e6-0f26-443e-a044-9109439e729d'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa14a62baf0>], external_ids={}, name=772af8e6-0f26-443e-a044-9109439e729d, nb_cfg_timestamp=1763789731621, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.991 164618 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa14a62fb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.991 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.992 164618 INFO oslo_service.service [-] Starting 1 workers
Nov 22 05:36:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:36.997 164618 DEBUG oslo_service.service [-] Started child 164844 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.000 164618 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp5_2ddpk5/privsep.sock']
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.001 164844 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-953242'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.031 164844 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.031 164844 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.032 164844 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.036 164844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.047 164844 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.056 164844 INFO eventlet.wsgi.server [-] (164844) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 22 05:36:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:37 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 22 05:36:37 compute-0 ceph-mon[75840]: pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:37 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:37 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.673 164618 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.674 164618 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp5_2ddpk5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.538 164849 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.543 164849 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.545 164849 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.546 164849 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164849
Nov 22 05:36:37 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:37.678 164849 DEBUG oslo.privsep.daemon [-] privsep: reply[4adfa083-ec3e-4a39-a723-52f67de2216f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 05:36:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.190 164849 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:36:38 compute-0 ceph-mon[75840]: pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.697 164849 DEBUG oslo.privsep.daemon [-] privsep: reply[07917ce7-9923-4d40-80e8-e61a7fb0de57]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.701 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, column=external_ids, values=({'neutron:ovn-metadata-id': 'd37bcddf-b93a-5e8c-a505-2020a426b129'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.714 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.724 164618 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.725 164618 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.726 164618 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.727 164618 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.728 164618 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.729 164618 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.730 164618 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.731 164618 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.732 164618 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.733 164618 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.734 164618 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.735 164618 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.736 164618 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.737 164618 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.738 164618 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.739 164618 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.740 164618 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.741 164618 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.742 164618 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.743 164618 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.744 164618 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.745 164618 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.746 164618 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.747 164618 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.748 164618 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.749 164618 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.750 164618 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.751 164618 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.752 164618 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.753 164618 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.754 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.755 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.756 164618 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.757 164618 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.758 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.759 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.760 164618 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.761 164618 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.762 164618 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.763 164618 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.764 164618 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.765 164618 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.766 164618 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.767 164618 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.768 164618 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.769 164618 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.770 164618 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.771 164618 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.772 164618 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.773 164618 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.774 164618 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.775 164618 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.776 164618 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.777 164618 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.778 164618 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.779 164618 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.780 164618 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.781 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.782 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.783 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.784 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.785 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:36:38 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:36:38.786 164618 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 05:36:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:40 compute-0 ceph-mon[75840]: pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:41 compute-0 sshd-session[164854]: Accepted publickey for zuul from 192.168.122.30 port 46772 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:36:41 compute-0 systemd-logind[798]: New session 48 of user zuul.
Nov 22 05:36:41 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 22 05:36:41 compute-0 sshd-session[164854]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:36:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:42 compute-0 python3.9[165007]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:36:43 compute-0 ceph-mon[75840]: pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:36:43
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'volumes', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:36:43 compute-0 sudo[165161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdzirwesfsyvolrwwyapmmculkdmuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789803.3396654-34-197252094749708/AnsiballZ_command.py'
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:36:43 compute-0 sudo[165161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:44 compute-0 python3.9[165163]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:36:44 compute-0 sudo[165161]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:45 compute-0 ceph-mon[75840]: pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:45 compute-0 sudo[165326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzuetucfppcliclhuiquorusgcmvvqmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789804.6120596-45-209279965022238/AnsiballZ_systemd_service.py'
Nov 22 05:36:45 compute-0 sudo[165326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:45 compute-0 python3.9[165328]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:36:45 compute-0 systemd[1]: Reloading.
Nov 22 05:36:45 compute-0 systemd-sysv-generator[165360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:36:45 compute-0 systemd-rc-local-generator[165356]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:36:45 compute-0 sudo[165326]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:46 compute-0 python3.9[165513]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:36:46 compute-0 network[165530]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:36:46 compute-0 network[165531]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:36:46 compute-0 network[165532]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:36:47 compute-0 ceph-mon[75840]: pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:49 compute-0 ceph-mon[75840]: pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:51 compute-0 sudo[165792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjmxnyjdbfkacbfzienyejwezdnrmfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789810.6869082-64-261134640370457/AnsiballZ_systemd_service.py'
Nov 22 05:36:51 compute-0 sudo[165792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:51 compute-0 ceph-mon[75840]: pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:51 compute-0 python3.9[165794]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:51 compute-0 sudo[165792]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:51 compute-0 sudo[165945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylgfmbuqtjjwhjmujhigbfbrkwzsturc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789811.5999882-64-269094947967649/AnsiballZ_systemd_service.py'
Nov 22 05:36:51 compute-0 sudo[165945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:52 compute-0 python3.9[165947]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:52 compute-0 sudo[165945]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:36:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:36:52 compute-0 sudo[166098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdlaeeyyshiyvahsoqnssbpokrrlvczg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789812.5452058-64-186835061834074/AnsiballZ_systemd_service.py'
Nov 22 05:36:52 compute-0 sudo[166098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:53 compute-0 ceph-mon[75840]: pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:53 compute-0 python3.9[166100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:53 compute-0 sudo[166098]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:53 compute-0 sudo[166251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiifhwiyihlgxnwbzvhyfsjkfqidqyho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789813.5116708-64-132901987062630/AnsiballZ_systemd_service.py'
Nov 22 05:36:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:53 compute-0 sudo[166251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:54 compute-0 python3.9[166253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:54 compute-0 sudo[166251]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:54 compute-0 sudo[166404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyrlxsvbsaejgkknpkajrprtynbwlvtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789814.445671-64-2543558033073/AnsiballZ_systemd_service.py'
Nov 22 05:36:54 compute-0 sudo[166404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:55 compute-0 ceph-mon[75840]: pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:55 compute-0 python3.9[166406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:55 compute-0 sudo[166404]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:55 compute-0 sudo[166557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-visehxfujirlfwdmxautbkcfyemoljqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789815.3577294-64-168344826709750/AnsiballZ_systemd_service.py'
Nov 22 05:36:55 compute-0 sudo[166557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:56 compute-0 python3.9[166559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:56 compute-0 sudo[166557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:56 compute-0 sudo[166710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomokdivppjmsycfbwrnjcuwlwbtfxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789816.3253706-64-107135343818824/AnsiballZ_systemd_service.py'
Nov 22 05:36:56 compute-0 sudo[166710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:57 compute-0 python3.9[166712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:36:57 compute-0 sudo[166710]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:57 compute-0 ceph-mon[75840]: pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:36:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:58 compute-0 sudo[166863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtzzrzbltpdvmbkovruatrjyfflgqoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789817.4920952-116-272156496290436/AnsiballZ_file.py'
Nov 22 05:36:58 compute-0 sudo[166863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:58 compute-0 python3.9[166865]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:58 compute-0 sudo[166863]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:58 compute-0 sudo[167032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnxjdcpspmjndmhymakvwhkrsybvcrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789818.4459233-116-15196529159689/AnsiballZ_file.py'
Nov 22 05:36:58 compute-0 sudo[167032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:58 compute-0 podman[166989]: 2025-11-22 05:36:58.794035583 +0000 UTC m=+0.120063320 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 22 05:36:58 compute-0 python3.9[167037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:58 compute-0 sudo[167032]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:59 compute-0 ceph-mon[75840]: pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:36:59 compute-0 sudo[167193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjbsbqczdxuyejrbjxulizyihwiogmur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789819.157771-116-45088159673652/AnsiballZ_file.py'
Nov 22 05:36:59 compute-0 sudo[167193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:36:59 compute-0 python3.9[167195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:36:59 compute-0 sudo[167193]: pam_unix(sudo:session): session closed for user root
Nov 22 05:36:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:00 compute-0 sudo[167345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizzbbvnqwhfcnktgbqmfndztipstjoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789819.946166-116-218192433629656/AnsiballZ_file.py'
Nov 22 05:37:00 compute-0 sudo[167345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:00 compute-0 python3.9[167347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:00 compute-0 sudo[167345]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:01 compute-0 sudo[167497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeheqsdgzhmhpouorcovzqwfaqmmilnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789820.7818313-116-134025739150143/AnsiballZ_file.py'
Nov 22 05:37:01 compute-0 sudo[167497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:01 compute-0 ceph-mon[75840]: pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:01 compute-0 python3.9[167499]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:01 compute-0 sudo[167497]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:02 compute-0 sudo[167649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trtbuljyfsrycveyxaglkzexxplykpkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789821.6560552-116-98557404857149/AnsiballZ_file.py'
Nov 22 05:37:02 compute-0 sudo[167649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:02 compute-0 python3.9[167651]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:02 compute-0 sudo[167649]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:02 compute-0 sudo[167801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-derxpkmgpzwktgwjudeqznygzhzcaecq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789822.5093255-116-173653004649406/AnsiballZ_file.py'
Nov 22 05:37:02 compute-0 sudo[167801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:03 compute-0 python3.9[167803]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:03 compute-0 sudo[167801]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:03 compute-0 ceph-mon[75840]: pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:03 compute-0 sudo[167953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niswhpgmjuidrrardfmeowmenxadopkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789823.295344-166-27162903850610/AnsiballZ_file.py'
Nov 22 05:37:03 compute-0 sudo[167953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:03 compute-0 python3.9[167955]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:03 compute-0 sudo[167953]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:04 compute-0 sudo[168105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imriqltuyxytzmzlnjvtyurypzuygmzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789824.0629234-166-52511320250801/AnsiballZ_file.py'
Nov 22 05:37:04 compute-0 sudo[168105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:04 compute-0 python3.9[168107]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:04 compute-0 sudo[168105]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:05 compute-0 sudo[168271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqaalbfomlvpiezugbgfxblsjvsmkbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789824.8289964-166-210352242937372/AnsiballZ_file.py'
Nov 22 05:37:05 compute-0 sudo[168271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:05 compute-0 podman[168231]: 2025-11-22 05:37:05.261971232 +0000 UTC m=+0.112103045 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 05:37:05 compute-0 ceph-mon[75840]: pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:05 compute-0 python3.9[168279]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:05 compute-0 sudo[168271]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:06 compute-0 sudo[168429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjbkjcneejzjtlkcovlecoeysswbrxrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789825.6186173-166-150942927354815/AnsiballZ_file.py'
Nov 22 05:37:06 compute-0 sudo[168429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:06 compute-0 python3.9[168431]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:06 compute-0 sudo[168429]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:06 compute-0 sudo[168581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jleosnrgoddpkhyjsmmvgsiluuelkkop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789826.406971-166-86563739290143/AnsiballZ_file.py'
Nov 22 05:37:06 compute-0 sudo[168581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:06 compute-0 python3.9[168583]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:06 compute-0 sudo[168581]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:07 compute-0 ceph-mon[75840]: pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:07 compute-0 sudo[168733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shxscrnhajowzorzzxrprdbpmylqbqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789827.1045291-166-34901853136968/AnsiballZ_file.py'
Nov 22 05:37:07 compute-0 sudo[168733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:07 compute-0 python3.9[168735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:07 compute-0 sudo[168733]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:08 compute-0 sudo[168885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypitrnuexsvwzjtugufcmebzemsxbid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789827.8459876-166-68097408731623/AnsiballZ_file.py'
Nov 22 05:37:08 compute-0 sudo[168885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:08 compute-0 python3.9[168887]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:37:08 compute-0 sudo[168885]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:08 compute-0 sudo[169037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viydieukqtixwempfepntbmwpcycyafh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789828.6632063-217-92696449891012/AnsiballZ_command.py'
Nov 22 05:37:08 compute-0 sudo[169037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:09 compute-0 python3.9[169039]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:09 compute-0 sudo[169037]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:09 compute-0 ceph-mon[75840]: pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:10 compute-0 python3.9[169191]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:37:10 compute-0 sudo[169341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jirvcmocjpuhgmqebronndfeboygmsej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789830.3907108-235-65224731110645/AnsiballZ_systemd_service.py'
Nov 22 05:37:10 compute-0 sudo[169341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:11 compute-0 python3.9[169343]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:37:11 compute-0 systemd[1]: Reloading.
Nov 22 05:37:11 compute-0 systemd-rc-local-generator[169367]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:37:11 compute-0 systemd-sysv-generator[169373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:37:11 compute-0 sudo[169341]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:11 compute-0 ceph-mon[75840]: pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:12 compute-0 sudo[169528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbdqsrhzcujcfcvsfjbslclpmopagycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789831.692223-243-104103173853787/AnsiballZ_command.py'
Nov 22 05:37:12 compute-0 sudo[169528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:12 compute-0 python3.9[169530]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:12 compute-0 sudo[169528]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:12 compute-0 sudo[169681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obqyptfthogaxrmkpwuwymwzetbtookz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789832.438471-243-8570271490897/AnsiballZ_command.py'
Nov 22 05:37:12 compute-0 sudo[169681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:12 compute-0 python3.9[169683]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:13 compute-0 sudo[169681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:13 compute-0 ceph-mon[75840]: pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:13 compute-0 sudo[169834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdyhynjuinodryojsrqaofspjifawtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789833.1901727-243-251618545315498/AnsiballZ_command.py'
Nov 22 05:37:13 compute-0 sudo[169834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:13 compute-0 python3.9[169836]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:13 compute-0 sudo[169834]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:14 compute-0 sudo[169987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzimnhgxlyemnxsbgwppupiaaknzxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789833.9963942-243-189356147699894/AnsiballZ_command.py'
Nov 22 05:37:14 compute-0 sudo[169987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:14 compute-0 python3.9[169989]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:14 compute-0 sudo[169987]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:15 compute-0 sudo[170140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnogxjuzjtzokhatqnzuivvkgbobfzux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789834.7767699-243-237618128844674/AnsiballZ_command.py'
Nov 22 05:37:15 compute-0 sudo[170140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:15 compute-0 python3.9[170142]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:15 compute-0 sudo[170140]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:15 compute-0 ceph-mon[75840]: pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 22 05:37:16 compute-0 sudo[170293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suudtoxfnejruehitpekkyazdlfhvoxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789835.6371272-243-91164355473674/AnsiballZ_command.py'
Nov 22 05:37:16 compute-0 sudo[170293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:16 compute-0 python3.9[170295]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:16 compute-0 sudo[170293]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:16 compute-0 sudo[170446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqhtpfzwsjckjtnioptavdivofqsmbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789836.43069-243-261007874354128/AnsiballZ_command.py'
Nov 22 05:37:16 compute-0 sudo[170446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:16 compute-0 python3.9[170448]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:37:16 compute-0 sudo[170446]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:17 compute-0 ceph-mon[75840]: pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 22 05:37:17 compute-0 sudo[170599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otqlxgfghbgygiqcluggliyibwvrfvqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789837.3743942-297-82273145782199/AnsiballZ_getent.py'
Nov 22 05:37:17 compute-0 sudo[170599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 05:37:18 compute-0 python3.9[170601]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 22 05:37:18 compute-0 sudo[170599]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:18 compute-0 sudo[170752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjjaodvjixydkxleshhaknbcwxwyoao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789838.237618-305-76829191959540/AnsiballZ_group.py'
Nov 22 05:37:18 compute-0 sudo[170752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:18 compute-0 python3.9[170754]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:37:18 compute-0 groupadd[170755]: group added to /etc/group: name=libvirt, GID=42473
Nov 22 05:37:18 compute-0 groupadd[170755]: group added to /etc/gshadow: name=libvirt
Nov 22 05:37:18 compute-0 groupadd[170755]: new group: name=libvirt, GID=42473
Nov 22 05:37:18 compute-0 sudo[170752]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:19 compute-0 ceph-mon[75840]: pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 05:37:19 compute-0 sudo[170910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwepfhvnantvytqcihtwnbmbxjohhnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789839.136864-313-237958531614739/AnsiballZ_user.py'
Nov 22 05:37:19 compute-0 sudo[170910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:19 compute-0 python3.9[170912]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 05:37:19 compute-0 useradd[170914]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 05:37:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 05:37:19 compute-0 sudo[170910]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:20 compute-0 sudo[171070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpeempucuqsngbhxjkphoqsnsanantgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789840.3102963-324-104743623935560/AnsiballZ_setup.py'
Nov 22 05:37:20 compute-0 sudo[171070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:20 compute-0 python3.9[171072]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:37:21 compute-0 sudo[171070]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:21 compute-0 ceph-mon[75840]: pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 05:37:21 compute-0 sudo[171154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayctqibeztpbwefdxfauhyhsbsfmpikm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789840.3102963-324-104743623935560/AnsiballZ_dnf.py'
Nov 22 05:37:21 compute-0 sudo[171154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:37:21 compute-0 python3.9[171156]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:37:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:23 compute-0 ceph-mon[75840]: pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:25 compute-0 ceph-mon[75840]: pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:27 compute-0 ceph-mon[75840]: pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:37:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 05:37:29 compute-0 podman[171167]: 2025-11-22 05:37:29.251030895 +0000 UTC m=+0.105136625 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:37:29 compute-0 ceph-mon[75840]: pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Nov 22 05:37:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 05:37:30 compute-0 ceph-mon[75840]: pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 05:37:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 05:37:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:33 compute-0 ceph-mon[75840]: pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Nov 22 05:37:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:35 compute-0 ceph-mon[75840]: pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:36 compute-0 podman[171365]: 2025-11-22 05:37:36.222599806 +0000 UTC m=+0.072267977 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:37:36 compute-0 sudo[171386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:36 compute-0 sudo[171386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:36 compute-0 sudo[171386]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.899 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:37:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:37:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:37:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:37:36 compute-0 sudo[171411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:37:36 compute-0 sudo[171411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:36 compute-0 sudo[171411]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:37 compute-0 sudo[171436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:37 compute-0 sudo[171436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:37 compute-0 sudo[171436]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:37 compute-0 ceph-mon[75840]: pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:37 compute-0 sudo[171461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:37:37 compute-0 sudo[171461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:37 compute-0 podman[171559]: 2025-11-22 05:37:37.681645401 +0000 UTC m=+0.096651793 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:37:37 compute-0 podman[171559]: 2025-11-22 05:37:37.769810972 +0000 UTC m=+0.184817294 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:37:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:38 compute-0 sudo[171461]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:37:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:37:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:38 compute-0 sudo[171719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:38 compute-0 sudo[171719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:38 compute-0 sudo[171719]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:38 compute-0 sudo[171744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:37:38 compute-0 sudo[171744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:38 compute-0 sudo[171744]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:38 compute-0 sudo[171769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:38 compute-0 sudo[171769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:38 compute-0 sudo[171769]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:38 compute-0 sudo[171794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:37:38 compute-0 sudo[171794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:39 compute-0 sudo[171794]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f3f03c33-a34e-4aa7-a034-2fccd98c53fb does not exist
Nov 22 05:37:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1bdb6bbc-c31c-45d9-be98-c5fd335178bf does not exist
Nov 22 05:37:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 18b947d5-6b73-4a60-aeb1-dfc607c3a9c6 does not exist
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:37:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:37:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:37:39 compute-0 sudo[171850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:39 compute-0 sudo[171850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:39 compute-0 sudo[171850]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:39 compute-0 sudo[171875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:37:39 compute-0 sudo[171875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:39 compute-0 sudo[171875]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:39 compute-0 sudo[171900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:39 compute-0 sudo[171900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:39 compute-0 sudo[171900]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:39 compute-0 sudo[171925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:37:39 compute-0 sudo[171925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.213917625 +0000 UTC m=+0.072149153 container create 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:37:40 compute-0 systemd[1]: Started libpod-conmon-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope.
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.186122375 +0000 UTC m=+0.044353943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.309828987 +0000 UTC m=+0.168060585 container init 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.321188147 +0000 UTC m=+0.179419635 container start 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.32458819 +0000 UTC m=+0.182819718 container attach 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:40 compute-0 interesting_lamport[172007]: 167 167
Nov 22 05:37:40 compute-0 systemd[1]: libpod-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope: Deactivated successfully.
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.331292523 +0000 UTC m=+0.189524061 container died 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3cebe32bbe5a7eef2d8f07686f8201b0ed749b2f6dff6869a5ab474fa8aff1-merged.mount: Deactivated successfully.
Nov 22 05:37:40 compute-0 podman[171989]: 2025-11-22 05:37:40.390877643 +0000 UTC m=+0.249109141 container remove 382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 05:37:40 compute-0 systemd[1]: libpod-conmon-382bbbdd78d0b30da9418d1fc726256e8d1247cfffeccb06ab5db7e5771b9bea.scope: Deactivated successfully.
Nov 22 05:37:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:37:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:37:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:37:40 compute-0 podman[172031]: 2025-11-22 05:37:40.649879553 +0000 UTC m=+0.069505821 container create 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:37:40 compute-0 systemd[1]: Started libpod-conmon-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope.
Nov 22 05:37:40 compute-0 podman[172031]: 2025-11-22 05:37:40.622972537 +0000 UTC m=+0.042598865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:40 compute-0 podman[172031]: 2025-11-22 05:37:40.771149337 +0000 UTC m=+0.190775645 container init 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:37:40 compute-0 podman[172031]: 2025-11-22 05:37:40.789396166 +0000 UTC m=+0.209022424 container start 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:37:40 compute-0 podman[172031]: 2025-11-22 05:37:40.793453097 +0000 UTC m=+0.213079425 container attach 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:37:41 compute-0 ceph-mon[75840]: pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:41 compute-0 vibrant_chatelet[172049]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:37:41 compute-0 vibrant_chatelet[172049]: --> relative data size: 1.0
Nov 22 05:37:41 compute-0 vibrant_chatelet[172049]: --> All data devices are unavailable
Nov 22 05:37:41 compute-0 systemd[1]: libpod-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Deactivated successfully.
Nov 22 05:37:41 compute-0 systemd[1]: libpod-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Consumed 1.045s CPU time.
Nov 22 05:37:41 compute-0 podman[172031]: 2025-11-22 05:37:41.896986014 +0000 UTC m=+1.316612292 container died 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e44721c6f9feab7ed82f5e29ca7661a3a7da219ca60eb6e9f5f2eda67b9a3197-merged.mount: Deactivated successfully.
Nov 22 05:37:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:41 compute-0 podman[172031]: 2025-11-22 05:37:41.997643516 +0000 UTC m=+1.417269774 container remove 6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:37:42 compute-0 systemd[1]: libpod-conmon-6a0a498ea1b21801f62da3e8807062cde4ee4a923cb97cf9a00b8d291e74dc4c.scope: Deactivated successfully.
Nov 22 05:37:42 compute-0 sudo[171925]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:42 compute-0 sudo[172093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:42 compute-0 sudo[172093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:42 compute-0 sudo[172093]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:42 compute-0 sudo[172119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:37:42 compute-0 sudo[172119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:42 compute-0 sudo[172119]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:42 compute-0 sudo[172144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:42 compute-0 sudo[172144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:42 compute-0 sudo[172144]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:42 compute-0 sudo[172171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:37:42 compute-0 sudo[172171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:42 compute-0 sshd-session[172151]: Invalid user sol from 80.94.92.182 port 58522
Nov 22 05:37:42 compute-0 podman[172237]: 2025-11-22 05:37:42.911316713 +0000 UTC m=+0.061865822 container create d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:37:42 compute-0 systemd[1]: Started libpod-conmon-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope.
Nov 22 05:37:42 compute-0 podman[172237]: 2025-11-22 05:37:42.888052977 +0000 UTC m=+0.038602086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:43 compute-0 podman[172237]: 2025-11-22 05:37:43.007096181 +0000 UTC m=+0.157645350 container init d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:37:43 compute-0 podman[172237]: 2025-11-22 05:37:43.019294734 +0000 UTC m=+0.169843833 container start d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:37:43 compute-0 podman[172237]: 2025-11-22 05:37:43.023338355 +0000 UTC m=+0.173887474 container attach d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:37:43 compute-0 systemd[1]: libpod-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope: Deactivated successfully.
Nov 22 05:37:43 compute-0 stoic_blackburn[172253]: 167 167
Nov 22 05:37:43 compute-0 conmon[172253]: conmon d65a4a01d945cc4b2865 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope/container/memory.events
Nov 22 05:37:43 compute-0 podman[172237]: 2025-11-22 05:37:43.028100955 +0000 UTC m=+0.178650054 container died d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-bba2544b685943f6eece6f45be3ef33d641e46f29fcd3e1fb61d2b7d8db1fe03-merged.mount: Deactivated successfully.
Nov 22 05:37:43 compute-0 podman[172237]: 2025-11-22 05:37:43.086786539 +0000 UTC m=+0.237335618 container remove d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackburn, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:37:43 compute-0 systemd[1]: libpod-conmon-d65a4a01d945cc4b2865f382a44f08982c8af02d5f3c1d5e3b968fea0f26e441.scope: Deactivated successfully.
Nov 22 05:37:43 compute-0 sshd-session[172151]: Connection closed by invalid user sol 80.94.92.182 port 58522 [preauth]
Nov 22 05:37:43 compute-0 podman[172277]: 2025-11-22 05:37:43.289199653 +0000 UTC m=+0.065539463 container create 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:37:43 compute-0 systemd[1]: Started libpod-conmon-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope.
Nov 22 05:37:43 compute-0 podman[172277]: 2025-11-22 05:37:43.261197257 +0000 UTC m=+0.037537167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:43 compute-0 podman[172277]: 2025-11-22 05:37:43.403362913 +0000 UTC m=+0.179702763 container init 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:37:43 compute-0 podman[172277]: 2025-11-22 05:37:43.409768899 +0000 UTC m=+0.186108739 container start 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:37:43 compute-0 podman[172277]: 2025-11-22 05:37:43.413058508 +0000 UTC m=+0.189398348 container attach 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:43 compute-0 ceph-mon[75840]: pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:37:43
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:37:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:44 compute-0 relaxed_easley[172294]: {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     "0": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "devices": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "/dev/loop3"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             ],
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_name": "ceph_lv0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_size": "21470642176",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "name": "ceph_lv0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "tags": {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_name": "ceph",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.crush_device_class": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.encrypted": "0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_id": "0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.vdo": "0"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             },
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "vg_name": "ceph_vg0"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         }
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     ],
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     "1": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "devices": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "/dev/loop4"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             ],
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_name": "ceph_lv1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_size": "21470642176",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "name": "ceph_lv1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "tags": {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_name": "ceph",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.crush_device_class": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.encrypted": "0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_id": "1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.vdo": "0"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             },
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "vg_name": "ceph_vg1"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         }
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     ],
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     "2": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "devices": [
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "/dev/loop5"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             ],
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_name": "ceph_lv2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_size": "21470642176",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "name": "ceph_lv2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "tags": {
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.cluster_name": "ceph",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.crush_device_class": "",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.encrypted": "0",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osd_id": "2",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:                 "ceph.vdo": "0"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             },
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "type": "block",
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:             "vg_name": "ceph_vg2"
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:         }
Nov 22 05:37:44 compute-0 relaxed_easley[172294]:     ]
Nov 22 05:37:44 compute-0 relaxed_easley[172294]: }
Nov 22 05:37:44 compute-0 systemd[1]: libpod-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope: Deactivated successfully.
Nov 22 05:37:44 compute-0 podman[172277]: 2025-11-22 05:37:44.230419492 +0000 UTC m=+1.006759352 container died 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e9fc01227d76a0bd366b7299bf6d27c304327b8edeb46400295aa0ccebe1579-merged.mount: Deactivated successfully.
Nov 22 05:37:44 compute-0 podman[172277]: 2025-11-22 05:37:44.313997387 +0000 UTC m=+1.090337247 container remove 03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:44 compute-0 systemd[1]: libpod-conmon-03d4eedcd409eb54550586615735e037ade35ddb3390c4dad2b51b1d8b4542b2.scope: Deactivated successfully.
Nov 22 05:37:44 compute-0 sudo[172171]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:44 compute-0 sudo[172317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:44 compute-0 sudo[172317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:44 compute-0 sudo[172317]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:44 compute-0 sudo[172342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:37:44 compute-0 sudo[172342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:44 compute-0 sudo[172342]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:44 compute-0 sudo[172367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:44 compute-0 sudo[172367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:44 compute-0 sudo[172367]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:44 compute-0 sudo[172392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:37:44 compute-0 sudo[172392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.095376408 +0000 UTC m=+0.067792025 container create 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:37:45 compute-0 systemd[1]: Started libpod-conmon-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope.
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.067026272 +0000 UTC m=+0.039441939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.197264822 +0000 UTC m=+0.169680479 container init 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.208302794 +0000 UTC m=+0.180718421 container start 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.213569208 +0000 UTC m=+0.185984885 container attach 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:37:45 compute-0 hardcore_brattain[172474]: 167 167
Nov 22 05:37:45 compute-0 systemd[1]: libpod-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope: Deactivated successfully.
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.215529571 +0000 UTC m=+0.187945198 container died 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b883df1b55ef166ab83b946c9e452777c5185b1d28e2a7079eb04a92ba4499-merged.mount: Deactivated successfully.
Nov 22 05:37:45 compute-0 podman[172459]: 2025-11-22 05:37:45.278253076 +0000 UTC m=+0.250668703 container remove 6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brattain, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:37:45 compute-0 systemd[1]: libpod-conmon-6551023e96ddf4273a79871acdc25ecef639a5f6d85330bb980771bb2025d6ab.scope: Deactivated successfully.
Nov 22 05:37:45 compute-0 ceph-mon[75840]: pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:45 compute-0 podman[172497]: 2025-11-22 05:37:45.524786365 +0000 UTC m=+0.065370687 container create ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:37:45 compute-0 systemd[1]: Started libpod-conmon-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope.
Nov 22 05:37:45 compute-0 podman[172497]: 2025-11-22 05:37:45.495848915 +0000 UTC m=+0.036433307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:37:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:37:45 compute-0 podman[172497]: 2025-11-22 05:37:45.656519886 +0000 UTC m=+0.197104228 container init ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:37:45 compute-0 podman[172497]: 2025-11-22 05:37:45.668565366 +0000 UTC m=+0.209149688 container start ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:37:45 compute-0 podman[172497]: 2025-11-22 05:37:45.673068149 +0000 UTC m=+0.213652531 container attach ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:37:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:46 compute-0 bold_allen[172513]: {
Nov 22 05:37:46 compute-0 bold_allen[172513]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_id": 1,
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "type": "bluestore"
Nov 22 05:37:46 compute-0 bold_allen[172513]:     },
Nov 22 05:37:46 compute-0 bold_allen[172513]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_id": 2,
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "type": "bluestore"
Nov 22 05:37:46 compute-0 bold_allen[172513]:     },
Nov 22 05:37:46 compute-0 bold_allen[172513]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_id": 0,
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:37:46 compute-0 bold_allen[172513]:         "type": "bluestore"
Nov 22 05:37:46 compute-0 bold_allen[172513]:     }
Nov 22 05:37:46 compute-0 bold_allen[172513]: }
Nov 22 05:37:46 compute-0 systemd[1]: libpod-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Deactivated successfully.
Nov 22 05:37:46 compute-0 podman[172497]: 2025-11-22 05:37:46.684411197 +0000 UTC m=+1.224995509 container died ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:37:46 compute-0 systemd[1]: libpod-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Consumed 1.012s CPU time.
Nov 22 05:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f7f3fe00dfc89a604a7a8db88c6984077d4d571ef7ee157ff7ca4f8f0f073de-merged.mount: Deactivated successfully.
Nov 22 05:37:46 compute-0 podman[172497]: 2025-11-22 05:37:46.753607768 +0000 UTC m=+1.294192070 container remove ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:37:46 compute-0 systemd[1]: libpod-conmon-ae737b961e0dd1272421a4eac52d38720c84906580a2534f3f95090e26b7e96a.scope: Deactivated successfully.
Nov 22 05:37:46 compute-0 sudo[172392]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:37:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:37:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:46 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 98c4f8c4-5601-4fc6-b7d6-a32a1f35d0b0 does not exist
Nov 22 05:37:46 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 7d48396e-931c-49f4-b817-6e7c6278b7ef does not exist
Nov 22 05:37:46 compute-0 sudo[172559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:37:46 compute-0 sudo[172559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:46 compute-0 sudo[172559]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:46 compute-0 sudo[172584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:37:46 compute-0 sudo[172584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:37:46 compute-0 sudo[172584]: pam_unix(sudo:session): session closed for user root
Nov 22 05:37:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:47 compute-0 ceph-mon[75840]: pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:37:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:49 compute-0 ceph-mon[75840]: pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:50 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:37:50 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:37:51 compute-0 ceph-mon[75840]: pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:52 compute-0 ceph-mon[75840]: pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:37:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:37:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:55 compute-0 ceph-mon[75840]: pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:57 compute-0 ceph-mon[75840]: pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:37:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:59 compute-0 ceph-mon[75840]: pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:37:59 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:37:59 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:37:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:00 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 22 05:38:00 compute-0 podman[172624]: 2025-11-22 05:38:00.282397849 +0000 UTC m=+0.129377977 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:38:01 compute-0 ceph-mon[75840]: pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:03 compute-0 ceph-mon[75840]: pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:05 compute-0 ceph-mon[75840]: pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:07 compute-0 podman[172650]: 2025-11-22 05:38:07.19400519 +0000 UTC m=+0.058183622 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:38:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:07 compute-0 ceph-mon[75840]: pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:09 compute-0 ceph-mon[75840]: pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:11 compute-0 ceph-mon[75840]: pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:13 compute-0 ceph-mon[75840]: pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:15 compute-0 ceph-mon[75840]: pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.496056) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895496119, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3509508, "memory_usage": 3568896, "flush_reason": "Manual Compaction"}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895524407, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3434386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9727, "largest_seqno": 11762, "table_properties": {"data_size": 3425132, "index_size": 5876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17719, "raw_average_key_size": 19, "raw_value_size": 3406829, "raw_average_value_size": 3735, "num_data_blocks": 267, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789662, "oldest_key_time": 1763789662, "file_creation_time": 1763789895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 28490 microseconds, and 12068 cpu microseconds.
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.524551) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3434386 bytes OK
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.524582) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.526917) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.526980) EVENT_LOG_v1 {"time_micros": 1763789895526969, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.527009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3501029, prev total WAL file size 3501029, number of live WAL files 2.
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.528717) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3353KB)], [26(5905KB)]
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895528796, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9481137, "oldest_snapshot_seqno": -1}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3702 keys, 7789203 bytes, temperature: kUnknown
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895595411, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7789203, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7760907, "index_size": 17946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88958, "raw_average_key_size": 24, "raw_value_size": 7690577, "raw_average_value_size": 2077, "num_data_blocks": 776, "num_entries": 3702, "num_filter_entries": 3702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763789895, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.596141) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7789203 bytes
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.598979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.3 rd, 116.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 4216, records dropped: 514 output_compression: NoCompression
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.599022) EVENT_LOG_v1 {"time_micros": 1763789895599002, "job": 10, "event": "compaction_finished", "compaction_time_micros": 67082, "compaction_time_cpu_micros": 33075, "output_level": 6, "num_output_files": 1, "total_output_size": 7789203, "num_input_records": 4216, "num_output_records": 3702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895601334, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763789895603684, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.528589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:38:15.603960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:38:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:17 compute-0 ceph-mon[75840]: pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:19 compute-0 ceph-mon[75840]: pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:21 compute-0 ceph-mon[75840]: pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:23 compute-0 ceph-mon[75840]: pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:25 compute-0 ceph-mon[75840]: pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:27 compute-0 ceph-mon[75840]: pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:28 compute-0 ceph-mon[75840]: pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:31 compute-0 ceph-mon[75840]: pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:31 compute-0 podman[182232]: 2025-11-22 05:38:31.245102122 +0000 UTC m=+0.097240521 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 05:38:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:32 compute-0 sshd-session[181884]: Invalid user test from 123.253.22.30 port 56388
Nov 22 05:38:32 compute-0 sshd-session[181884]: Connection closed by invalid user test 123.253.22.30 port 56388 [preauth]
Nov 22 05:38:33 compute-0 ceph-mon[75840]: pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:35 compute-0 ceph-mon[75840]: pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.900 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:38:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:38:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:38:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:38:37 compute-0 ceph-mon[75840]: pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:38 compute-0 podman[185583]: 2025-11-22 05:38:38.222209991 +0000 UTC m=+0.077975699 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 05:38:39 compute-0 ceph-mon[75840]: pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:41 compute-0 ceph-mon[75840]: pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:43 compute-0 ceph-mon[75840]: pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:38:43
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', '.rgw.root', 'default.rgw.control']
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:38:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:45 compute-0 ceph-mon[75840]: pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:47 compute-0 sudo[189497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:47 compute-0 sudo[189497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:47 compute-0 sudo[189497]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:47 compute-0 sudo[189522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:38:47 compute-0 sudo[189522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:47 compute-0 sudo[189522]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:47 compute-0 sudo[189547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:47 compute-0 sudo[189547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:47 compute-0 sudo[189547]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:47 compute-0 sudo[189572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:38:47 compute-0 sudo[189572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:47 compute-0 ceph-mon[75840]: pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:47 compute-0 sudo[189572]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 79c0eb9e-2ffe-4af8-9d3f-2088851068fc does not exist
Nov 22 05:38:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 808801b8-29a4-4b3e-84b2-e7e0fdda8e3f does not exist
Nov 22 05:38:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 50373c44-20a6-4e98-b88c-1929a24b1280 does not exist
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:38:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:38:48 compute-0 sudo[189628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:48 compute-0 sudo[189628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:48 compute-0 sudo[189628]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:48 compute-0 sudo[189653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:38:48 compute-0 sudo[189653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:48 compute-0 sudo[189653]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:48 compute-0 sudo[189681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:48 compute-0 sudo[189681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:48 compute-0 sudo[189681]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:48 compute-0 sudo[189706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:38:48 compute-0 sudo[189706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:38:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.753169644 +0000 UTC m=+0.033040128 container create eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:48 compute-0 systemd[1]: Started libpod-conmon-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope.
Nov 22 05:38:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.739752787 +0000 UTC m=+0.019623271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.84574924 +0000 UTC m=+0.125619734 container init eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.8540364 +0000 UTC m=+0.133906924 container start eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:38:48 compute-0 gallant_lamport[189795]: 167 167
Nov 22 05:38:48 compute-0 systemd[1]: libpod-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope: Deactivated successfully.
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.860141422 +0000 UTC m=+0.140011916 container attach eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.860600823 +0000 UTC m=+0.140471317 container died eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b82b2023b18d981ad166553d7a1a36ed2a5344df143f190b90c47b5a8f9463-merged.mount: Deactivated successfully.
Nov 22 05:38:48 compute-0 podman[189779]: 2025-11-22 05:38:48.925757033 +0000 UTC m=+0.205627547 container remove eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:38:48 compute-0 systemd[1]: libpod-conmon-eeaadc5d780892c114eca2f62f2e6e6cffbea9dd4ed1d4e3d84fa2ce6e3e8e01.scope: Deactivated successfully.
Nov 22 05:38:49 compute-0 podman[189819]: 2025-11-22 05:38:49.220095922 +0000 UTC m=+0.103030224 container create ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 05:38:49 compute-0 podman[189819]: 2025-11-22 05:38:49.16159309 +0000 UTC m=+0.044527442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:49 compute-0 systemd[1]: Started libpod-conmon-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope.
Nov 22 05:38:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:49 compute-0 podman[189819]: 2025-11-22 05:38:49.316054758 +0000 UTC m=+0.198989030 container init ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:38:49 compute-0 podman[189819]: 2025-11-22 05:38:49.32255681 +0000 UTC m=+0.205491092 container start ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:38:49 compute-0 podman[189819]: 2025-11-22 05:38:49.326465234 +0000 UTC m=+0.209399516 container attach ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:38:49 compute-0 ceph-mon[75840]: pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:50 compute-0 zen_chaum[189835]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:38:50 compute-0 zen_chaum[189835]: --> relative data size: 1.0
Nov 22 05:38:50 compute-0 zen_chaum[189835]: --> All data devices are unavailable
Nov 22 05:38:50 compute-0 systemd[1]: libpod-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Deactivated successfully.
Nov 22 05:38:50 compute-0 systemd[1]: libpod-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Consumed 1.023s CPU time.
Nov 22 05:38:50 compute-0 podman[189819]: 2025-11-22 05:38:50.402057182 +0000 UTC m=+1.284991524 container died ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c67ddfbd89ed380ee6a5927b0d2d4fe69a762acf1cbfbfc99117ac08c0e770b-merged.mount: Deactivated successfully.
Nov 22 05:38:50 compute-0 podman[189819]: 2025-11-22 05:38:50.482124147 +0000 UTC m=+1.365058409 container remove ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:38:50 compute-0 systemd[1]: libpod-conmon-ffb53b07ae3303ebd6b489dc48942d617757804fd0f4d5337effbfe000257b81.scope: Deactivated successfully.
Nov 22 05:38:50 compute-0 sudo[189706]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:50 compute-0 sudo[189876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:50 compute-0 sudo[189876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:50 compute-0 sudo[189876]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:50 compute-0 sudo[189901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:38:50 compute-0 sudo[189901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:50 compute-0 sudo[189901]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:50 compute-0 sudo[189926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:50 compute-0 sudo[189926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:50 compute-0 sudo[189926]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:50 compute-0 sudo[189951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:38:50 compute-0 sudo[189951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.258004983 +0000 UTC m=+0.051324913 container create 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:38:51 compute-0 systemd[1]: Started libpod-conmon-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope.
Nov 22 05:38:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.237890309 +0000 UTC m=+0.031210269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.350329443 +0000 UTC m=+0.143649383 container init 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.357649387 +0000 UTC m=+0.150969337 container start 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:38:51 compute-0 gifted_darwin[190030]: 167 167
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.362655949 +0000 UTC m=+0.155975889 container attach 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:38:51 compute-0 systemd[1]: libpod-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope: Deactivated successfully.
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.36758603 +0000 UTC m=+0.160905970 container died 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:38:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-216b2e1a86618ec1b1fe3d42b35fef515b835a768dcf055582a3fd790bdfea5d-merged.mount: Deactivated successfully.
Nov 22 05:38:51 compute-0 podman[190014]: 2025-11-22 05:38:51.422014035 +0000 UTC m=+0.215333995 container remove 44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:38:51 compute-0 systemd[1]: libpod-conmon-44a469370bcd26710c81a52c0d39bd772a42424b27fdff013a0708eb56d90695.scope: Deactivated successfully.
Nov 22 05:38:51 compute-0 ceph-mon[75840]: pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:51 compute-0 podman[190054]: 2025-11-22 05:38:51.679885846 +0000 UTC m=+0.064784159 container create d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:38:51 compute-0 systemd[1]: Started libpod-conmon-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope.
Nov 22 05:38:51 compute-0 sshd-session[189976]: Invalid user drone from 80.94.92.166 port 56530
Nov 22 05:38:51 compute-0 podman[190054]: 2025-11-22 05:38:51.654844752 +0000 UTC m=+0.039743055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:51 compute-0 podman[190054]: 2025-11-22 05:38:51.796749028 +0000 UTC m=+0.181647381 container init d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:38:51 compute-0 podman[190054]: 2025-11-22 05:38:51.808106769 +0000 UTC m=+0.193005082 container start d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:38:51 compute-0 podman[190054]: 2025-11-22 05:38:51.81268453 +0000 UTC m=+0.197582883 container attach d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:38:51 compute-0 sshd-session[189976]: Connection closed by invalid user drone 80.94.92.166 port 56530 [preauth]
Nov 22 05:38:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:52 compute-0 funny_chatelet[190071]: {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     "0": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "devices": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "/dev/loop3"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             ],
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_name": "ceph_lv0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_size": "21470642176",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "name": "ceph_lv0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "tags": {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_name": "ceph",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.crush_device_class": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.encrypted": "0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_id": "0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.vdo": "0"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             },
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "vg_name": "ceph_vg0"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         }
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     ],
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     "1": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "devices": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "/dev/loop4"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             ],
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_name": "ceph_lv1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_size": "21470642176",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "name": "ceph_lv1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "tags": {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_name": "ceph",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.crush_device_class": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.encrypted": "0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_id": "1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.vdo": "0"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             },
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "vg_name": "ceph_vg1"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         }
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     ],
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     "2": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "devices": [
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "/dev/loop5"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             ],
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_name": "ceph_lv2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_size": "21470642176",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "name": "ceph_lv2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "tags": {
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.cluster_name": "ceph",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.crush_device_class": "",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.encrypted": "0",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osd_id": "2",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:                 "ceph.vdo": "0"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             },
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "type": "block",
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:             "vg_name": "ceph_vg2"
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:         }
Nov 22 05:38:52 compute-0 funny_chatelet[190071]:     ]
Nov 22 05:38:52 compute-0 funny_chatelet[190071]: }
Nov 22 05:38:52 compute-0 systemd[1]: libpod-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope: Deactivated successfully.
Nov 22 05:38:52 compute-0 podman[190054]: 2025-11-22 05:38:52.63619675 +0000 UTC m=+1.021095063 container died d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:38:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5abb327eb7074db500a43dbb26c1ea79f6d11e43e239c12a520b6d63a084ba7-merged.mount: Deactivated successfully.
Nov 22 05:38:52 compute-0 podman[190054]: 2025-11-22 05:38:52.877982545 +0000 UTC m=+1.262880828 container remove d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:38:52 compute-0 systemd[1]: libpod-conmon-d290358ba692e97d37dabe91d14136d51a240ba156a968acb0248d7028642922.scope: Deactivated successfully.
Nov 22 05:38:52 compute-0 sudo[189951]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:53 compute-0 sudo[190094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:53 compute-0 sudo[190094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:53 compute-0 sudo[190094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:53 compute-0 sudo[190119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:38:53 compute-0 sudo[190119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:53 compute-0 sudo[190119]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:53 compute-0 sudo[190144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:53 compute-0 sudo[190144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:53 compute-0 sudo[190144]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:53 compute-0 sudo[190169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:38:53 compute-0 sudo[190169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:53 compute-0 ceph-mon[75840]: pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:53 compute-0 podman[190232]: 2025-11-22 05:38:53.68663937 +0000 UTC m=+0.065036746 container create 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:38:53 compute-0 systemd[1]: Started libpod-conmon-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope.
Nov 22 05:38:53 compute-0 podman[190232]: 2025-11-22 05:38:53.660010484 +0000 UTC m=+0.038407920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:53 compute-0 podman[190232]: 2025-11-22 05:38:53.783782588 +0000 UTC m=+0.162180024 container init 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:38:53 compute-0 podman[190232]: 2025-11-22 05:38:53.78987408 +0000 UTC m=+0.168271466 container start 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:38:53 compute-0 podman[190232]: 2025-11-22 05:38:53.793718502 +0000 UTC m=+0.172115888 container attach 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:38:53 compute-0 gracious_golick[190249]: 167 167
Nov 22 05:38:53 compute-0 systemd[1]: libpod-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope: Deactivated successfully.
Nov 22 05:38:53 compute-0 podman[190254]: 2025-11-22 05:38:53.841231113 +0000 UTC m=+0.029402721 container died 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fcacc5891067ec4aa0261990da4f36bb23fe9bcd8234e9e9036c93d72577640-merged.mount: Deactivated successfully.
Nov 22 05:38:53 compute-0 podman[190254]: 2025-11-22 05:38:53.890664954 +0000 UTC m=+0.078836562 container remove 5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:38:53 compute-0 systemd[1]: libpod-conmon-5d74422a2c23d37f36ec7781f35ecc06298680472d789871273968503ac614e4.scope: Deactivated successfully.
Nov 22 05:38:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:54 compute-0 podman[190276]: 2025-11-22 05:38:54.114123713 +0000 UTC m=+0.039493679 container create b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:38:54 compute-0 systemd[1]: Started libpod-conmon-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope.
Nov 22 05:38:54 compute-0 podman[190276]: 2025-11-22 05:38:54.096027983 +0000 UTC m=+0.021397999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:38:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:38:54 compute-0 podman[190276]: 2025-11-22 05:38:54.23159694 +0000 UTC m=+0.156966926 container init b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:38:54 compute-0 podman[190276]: 2025-11-22 05:38:54.243307371 +0000 UTC m=+0.168677367 container start b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:54 compute-0 podman[190276]: 2025-11-22 05:38:54.247982804 +0000 UTC m=+0.173352800 container attach b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:38:55 compute-0 objective_mahavira[190293]: {
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_id": 1,
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "type": "bluestore"
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     },
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_id": 2,
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "type": "bluestore"
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     },
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_id": 0,
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:         "type": "bluestore"
Nov 22 05:38:55 compute-0 objective_mahavira[190293]:     }
Nov 22 05:38:55 compute-0 objective_mahavira[190293]: }
Nov 22 05:38:55 compute-0 systemd[1]: libpod-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Deactivated successfully.
Nov 22 05:38:55 compute-0 podman[190276]: 2025-11-22 05:38:55.350411035 +0000 UTC m=+1.275781051 container died b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:38:55 compute-0 systemd[1]: libpod-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Consumed 1.112s CPU time.
Nov 22 05:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-27fd8d6e8905aae8ae4e5db117be1c5192348c3e00652ff933a432fdbb00a339-merged.mount: Deactivated successfully.
Nov 22 05:38:55 compute-0 podman[190276]: 2025-11-22 05:38:55.438351178 +0000 UTC m=+1.363721184 container remove b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mahavira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:38:55 compute-0 systemd[1]: libpod-conmon-b0e7b8878dd3cb3a91f6815af6e37b4f20b5724f4178b1afc0edb37a8adc94c5.scope: Deactivated successfully.
Nov 22 05:38:55 compute-0 sudo[190169]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:38:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:38:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:55 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c5e97d09-6ec3-4210-b55e-7dc06dde093f does not exist
Nov 22 05:38:55 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6dc0bf76-fb4e-4516-942d-d5a5d68f1bde does not exist
Nov 22 05:38:55 compute-0 ceph-mon[75840]: pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:38:55 compute-0 sudo[190340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:38:55 compute-0 sudo[190340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:55 compute-0 sudo[190340]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:55 compute-0 sudo[190365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:38:55 compute-0 sudo[190365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:38:55 compute-0 sudo[190365]: pam_unix(sudo:session): session closed for user root
Nov 22 05:38:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:38:57 compute-0 ceph-mon[75840]: pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:58 compute-0 ceph-mon[75840]: pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:38:58 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 05:38:58 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 05:38:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:00 compute-0 groupadd[190402]: group added to /etc/group: name=dnsmasq, GID=991
Nov 22 05:39:00 compute-0 groupadd[190402]: group added to /etc/gshadow: name=dnsmasq
Nov 22 05:39:00 compute-0 groupadd[190402]: new group: name=dnsmasq, GID=991
Nov 22 05:39:00 compute-0 useradd[190409]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 22 05:39:00 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:39:00 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 22 05:39:00 compute-0 dbus-broker-launch[757]: Noticed file-system modification, trigger reload.
Nov 22 05:39:01 compute-0 ceph-mon[75840]: pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:01 compute-0 groupadd[190422]: group added to /etc/group: name=clevis, GID=990
Nov 22 05:39:01 compute-0 groupadd[190422]: group added to /etc/gshadow: name=clevis
Nov 22 05:39:01 compute-0 groupadd[190422]: new group: name=clevis, GID=990
Nov 22 05:39:01 compute-0 useradd[190429]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 22 05:39:01 compute-0 usermod[190458]: add 'clevis' to group 'tss'
Nov 22 05:39:01 compute-0 usermod[190458]: add 'clevis' to shadow group 'tss'
Nov 22 05:39:01 compute-0 podman[190430]: 2025-11-22 05:39:01.396941784 +0000 UTC m=+0.110263877 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 22 05:39:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:03 compute-0 ceph-mon[75840]: pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:04 compute-0 polkitd[44246]: Reloading rules
Nov 22 05:39:04 compute-0 polkitd[44246]: Collecting garbage unconditionally...
Nov 22 05:39:04 compute-0 polkitd[44246]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 05:39:04 compute-0 polkitd[44246]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 05:39:04 compute-0 polkitd[44246]: Finished loading, compiling and executing 3 rules
Nov 22 05:39:04 compute-0 polkitd[44246]: Reloading rules
Nov 22 05:39:04 compute-0 polkitd[44246]: Collecting garbage unconditionally...
Nov 22 05:39:04 compute-0 polkitd[44246]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 05:39:04 compute-0 polkitd[44246]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 05:39:04 compute-0 polkitd[44246]: Finished loading, compiling and executing 3 rules
Nov 22 05:39:05 compute-0 ceph-mon[75840]: pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:06 compute-0 groupadd[190652]: group added to /etc/group: name=ceph, GID=167
Nov 22 05:39:06 compute-0 groupadd[190652]: group added to /etc/gshadow: name=ceph
Nov 22 05:39:06 compute-0 groupadd[190652]: new group: name=ceph, GID=167
Nov 22 05:39:06 compute-0 useradd[190658]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 22 05:39:07 compute-0 ceph-mon[75840]: pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:09 compute-0 podman[191204]: 2025-11-22 05:39:09.221252061 +0000 UTC m=+0.079025167 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 05:39:09 compute-0 ceph-mon[75840]: pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:09 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 22 05:39:09 compute-0 sshd[1006]: Received signal 15; terminating.
Nov 22 05:39:09 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 22 05:39:09 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 22 05:39:09 compute-0 systemd[1]: sshd.service: Consumed 19.560s CPU time, read 32.0K from disk, written 124.0K to disk.
Nov 22 05:39:09 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 22 05:39:09 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 22 05:39:09 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 05:39:09 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 05:39:09 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 05:39:09 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 22 05:39:09 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 22 05:39:09 compute-0 sshd[191303]: Server listening on 0.0.0.0 port 22.
Nov 22 05:39:09 compute-0 sshd[191303]: Server listening on :: port 22.
Nov 22 05:39:09 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 22 05:39:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:11 compute-0 ceph-mon[75840]: pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:12 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:39:12 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:39:12 compute-0 systemd[1]: Reloading.
Nov 22 05:39:12 compute-0 systemd-sysv-generator[191561]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:12 compute-0 systemd-rc-local-generator[191552]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:12 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:14 compute-0 ceph-mon[75840]: pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:15 compute-0 ceph-mon[75840]: pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:16 compute-0 sudo[171154]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:17 compute-0 sudo[195685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thrywrncdfzchpbhmhwsiitfdtjjqtwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789956.508801-336-123764274116218/AnsiballZ_systemd.py'
Nov 22 05:39:17 compute-0 sudo[195685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:17 compute-0 ceph-mon[75840]: pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:17 compute-0 python3.9[195715]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:39:17 compute-0 systemd[1]: Reloading.
Nov 22 05:39:17 compute-0 systemd-sysv-generator[196067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:17 compute-0 systemd-rc-local-generator[196062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:17 compute-0 sudo[195685]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:18 compute-0 sudo[196779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvslzownhxldattpklodglixgwgxxcvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789958.098409-336-128805302630973/AnsiballZ_systemd.py'
Nov 22 05:39:18 compute-0 sudo[196779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:18 compute-0 python3.9[196800]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:39:18 compute-0 systemd[1]: Reloading.
Nov 22 05:39:18 compute-0 systemd-sysv-generator[197256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:18 compute-0 systemd-rc-local-generator[197247]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:19 compute-0 sudo[196779]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:19 compute-0 ceph-mon[75840]: pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:19 compute-0 sudo[198003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjcoqnhtklbiwyhzahglwmnidlkfbhaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789959.3101983-336-105034970499505/AnsiballZ_systemd.py'
Nov 22 05:39:19 compute-0 sudo[198003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:19 compute-0 python3.9[198039]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:39:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:20 compute-0 systemd[1]: Reloading.
Nov 22 05:39:20 compute-0 systemd-sysv-generator[198428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:20 compute-0 systemd-rc-local-generator[198424]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:20 compute-0 sudo[198003]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:20 compute-0 sudo[199215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgskjbxbwcuxmjqniadgbgaygwgyntr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789960.5493705-336-260545772029866/AnsiballZ_systemd.py'
Nov 22 05:39:20 compute-0 sudo[199215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:21 compute-0 python3.9[199233]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:39:21 compute-0 systemd[1]: Reloading.
Nov 22 05:39:21 compute-0 systemd-rc-local-generator[199635]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:21 compute-0 systemd-sysv-generator[199641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:21 compute-0 ceph-mon[75840]: pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:21 compute-0 sudo[199215]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:22 compute-0 sudo[200390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfyrnkvvrgrljcdgtujiciumrirwruub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789961.791434-365-110955471823308/AnsiballZ_systemd.py'
Nov 22 05:39:22 compute-0 sudo[200390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:22 compute-0 python3.9[200413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:22 compute-0 systemd[1]: Reloading.
Nov 22 05:39:22 compute-0 systemd-rc-local-generator[200852]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:22 compute-0 systemd-sysv-generator[200857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:39:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:39:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 13.204s CPU time.
Nov 22 05:39:22 compute-0 systemd[1]: run-rda1824e87d764b0fac846f34253abce5.service: Deactivated successfully.
Nov 22 05:39:22 compute-0 sudo[200390]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:23 compute-0 ceph-mon[75840]: pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:23 compute-0 sudo[201040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjypbpbosxhcrwiembahvbkwqrcikxsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789963.1246595-365-210865665989964/AnsiballZ_systemd.py'
Nov 22 05:39:23 compute-0 sudo[201040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:23 compute-0 python3.9[201042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:23 compute-0 systemd[1]: Reloading.
Nov 22 05:39:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:24 compute-0 systemd-rc-local-generator[201070]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:24 compute-0 systemd-sysv-generator[201076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:24 compute-0 sudo[201040]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:24 compute-0 sudo[201230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zobzshonbjeqvdaxjmhfhpvrjapqutqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789964.4780543-365-213898834399305/AnsiballZ_systemd.py'
Nov 22 05:39:24 compute-0 sudo[201230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:25 compute-0 python3.9[201232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:25 compute-0 systemd[1]: Reloading.
Nov 22 05:39:25 compute-0 systemd-rc-local-generator[201263]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:25 compute-0 systemd-sysv-generator[201266]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:25 compute-0 ceph-mon[75840]: pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:25 compute-0 sudo[201230]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:26 compute-0 sudo[201420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqqrjhnnrbosjltcuwctnrqxcxmlpvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789965.8798685-365-13686071074969/AnsiballZ_systemd.py'
Nov 22 05:39:26 compute-0 sudo[201420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:26 compute-0 python3.9[201422]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:26 compute-0 sudo[201420]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:27 compute-0 sudo[201575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogpmckkscochcjrrgwtrjbgkajecwkwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789966.8815615-365-224916238750218/AnsiballZ_systemd.py'
Nov 22 05:39:27 compute-0 sudo[201575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:27 compute-0 ceph-mon[75840]: pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:27 compute-0 python3.9[201577]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:27 compute-0 systemd[1]: Reloading.
Nov 22 05:39:27 compute-0 systemd-rc-local-generator[201608]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:27 compute-0 systemd-sysv-generator[201613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:27 compute-0 sudo[201575]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:28 compute-0 sudo[201765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsuuujxlwgvxtwnlnojjnswratxhuova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789968.1883163-401-279551296566035/AnsiballZ_systemd.py'
Nov 22 05:39:28 compute-0 sudo[201765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:28 compute-0 python3.9[201767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 05:39:28 compute-0 systemd[1]: Reloading.
Nov 22 05:39:28 compute-0 systemd-rc-local-generator[201798]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:39:28 compute-0 systemd-sysv-generator[201801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:39:29 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 22 05:39:29 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 22 05:39:29 compute-0 sudo[201765]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:29 compute-0 ceph-mon[75840]: pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:30 compute-0 sudo[201958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpxfzggdvrzegshnsiikqbkzjnotscva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789969.5016775-409-47992969542293/AnsiballZ_systemd.py'
Nov 22 05:39:30 compute-0 sudo[201958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:30 compute-0 python3.9[201960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:30 compute-0 sudo[201958]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:30 compute-0 sudo[202113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvplpehdcftshbxptzamuhximyqmttf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789970.5783288-409-145962827692455/AnsiballZ_systemd.py'
Nov 22 05:39:30 compute-0 sudo[202113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:31 compute-0 python3.9[202115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:31 compute-0 sudo[202113]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:31 compute-0 ceph-mon[75840]: pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:31 compute-0 sudo[202285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzzbibbvpjdimhtyqmgvmuyvfxxmqmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789971.3717887-409-58492556237390/AnsiballZ_systemd.py'
Nov 22 05:39:31 compute-0 sudo[202285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:31 compute-0 podman[202242]: 2025-11-22 05:39:31.790168796 +0000 UTC m=+0.118253818 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 05:39:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:32 compute-0 python3.9[202289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:33 compute-0 sudo[202285]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:33 compute-0 ceph-mon[75840]: pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:33 compute-0 sudo[202447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqjuaybfwrqslmyejzltdsrjejlmdbgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789973.3862166-409-266415030035152/AnsiballZ_systemd.py'
Nov 22 05:39:33 compute-0 sudo[202447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:34 compute-0 python3.9[202449]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:34 compute-0 sudo[202447]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:34 compute-0 sudo[202602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaqbqjhjtdcjgfwwsiskwbfvzlrherpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789974.3230877-409-50911335655385/AnsiballZ_systemd.py'
Nov 22 05:39:34 compute-0 sudo[202602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:35 compute-0 python3.9[202604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:35 compute-0 sudo[202602]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:35 compute-0 ceph-mon[75840]: pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:35 compute-0 sudo[202757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbsqyekjjcmlnvqjikelboqlwcekuhxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789975.2697773-409-141478494247982/AnsiballZ_systemd.py'
Nov 22 05:39:35 compute-0 sudo[202757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:35 compute-0 python3.9[202759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:36 compute-0 sudo[202757]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:36 compute-0 sudo[202912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvsrzpuddyrzgyovzfgvxcpaecjbvlgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789976.1657913-409-137520135051815/AnsiballZ_systemd.py'
Nov 22 05:39:36 compute-0 sudo[202912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:36 compute-0 ceph-mon[75840]: pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:36 compute-0 python3.9[202914]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:36 compute-0 sudo[202912]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.901 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:39:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.902 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:39:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:39:36.902 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:39:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:37 compute-0 sudo[203067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbxrxyjbzkfhygvcyicggdfiqfellyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789977.0628123-409-31178223689627/AnsiballZ_systemd.py'
Nov 22 05:39:37 compute-0 sudo[203067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:37 compute-0 python3.9[203069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:37 compute-0 sudo[203067]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:38 compute-0 sudo[203222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oamnxzfwttsydsoyrdkqjxucjzykvpin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789977.9489856-409-230645357694047/AnsiballZ_systemd.py'
Nov 22 05:39:38 compute-0 sudo[203222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:38 compute-0 python3.9[203224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:38 compute-0 sudo[203222]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:39 compute-0 ceph-mon[75840]: pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:39 compute-0 sudo[203377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktqmrfqiuiwfudvvigexbskdemzoliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789978.770123-409-169430631760462/AnsiballZ_systemd.py'
Nov 22 05:39:39 compute-0 sudo[203377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:39 compute-0 python3.9[203379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:39 compute-0 podman[203381]: 2025-11-22 05:39:39.553633853 +0000 UTC m=+0.083888305 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 05:39:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:40 compute-0 sudo[203377]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:41 compute-0 ceph-mon[75840]: pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:41 compute-0 sudo[203551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eekmhzdmyvsugjyjwughrjvvtldbzwlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789980.763684-409-118853719484706/AnsiballZ_systemd.py'
Nov 22 05:39:41 compute-0 sudo[203551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:41 compute-0 python3.9[203553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:41 compute-0 sudo[203551]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:42 compute-0 sudo[203706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udwabkbakqrnkxzpgqsszkcxgvrolmai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789981.823869-409-15222860454374/AnsiballZ_systemd.py'
Nov 22 05:39:42 compute-0 sudo[203706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:42 compute-0 python3.9[203708]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:43 compute-0 ceph-mon[75840]: pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:43 compute-0 sudo[203706]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:39:43
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'images']
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:39:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:39:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:44 compute-0 sudo[203861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnvhnfyxeputqdexukozmefwgfolfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789983.7197566-409-194147689486146/AnsiballZ_systemd.py'
Nov 22 05:39:44 compute-0 sudo[203861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:44 compute-0 python3.9[203863]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:44 compute-0 sudo[203861]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:45 compute-0 sudo[204016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzxckdgkjpzepkdseqtdlkahmttcudxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789984.711998-409-183766175721532/AnsiballZ_systemd.py'
Nov 22 05:39:45 compute-0 sudo[204016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:45 compute-0 ceph-mon[75840]: pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:45 compute-0 python3.9[204018]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 05:39:45 compute-0 sudo[204016]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:46 compute-0 sudo[204171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqchvuycwqbpynlqhoawdvqadmeverxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789985.9838042-511-19607288177343/AnsiballZ_file.py'
Nov 22 05:39:46 compute-0 sudo[204171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:46 compute-0 python3.9[204173]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:46 compute-0 sudo[204171]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:47 compute-0 sudo[204323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluskppfqjwrsjoyofwgdtrxhwuwnyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789986.7581382-511-163661078346627/AnsiballZ_file.py'
Nov 22 05:39:47 compute-0 sudo[204323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:47 compute-0 ceph-mon[75840]: pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:47 compute-0 python3.9[204325]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:47 compute-0 sudo[204323]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:47 compute-0 sudo[204475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihuskhpvogwzkgdrdhjvbagyemmdkli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789987.4843154-511-216773833323327/AnsiballZ_file.py'
Nov 22 05:39:47 compute-0 sudo[204475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:48 compute-0 python3.9[204477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:48 compute-0 sudo[204475]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:48 compute-0 sudo[204627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtmohugtnpxunikjwxqehkltyjfeogcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789988.450301-511-60116508356835/AnsiballZ_file.py'
Nov 22 05:39:48 compute-0 sudo[204627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:49 compute-0 python3.9[204629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:49 compute-0 sudo[204627]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:49 compute-0 ceph-mon[75840]: pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:50 compute-0 sudo[204779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fctqxzurjpcspwixcsjxnjcwzrgdhbzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789989.8380442-511-66148700950476/AnsiballZ_file.py'
Nov 22 05:39:50 compute-0 sudo[204779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:50 compute-0 python3.9[204781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:50 compute-0 sudo[204779]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:51 compute-0 ceph-mon[75840]: pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:51 compute-0 sudo[204931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avsaimksxxtmpxffzynjnclhldwvuzon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789990.6786509-511-208030028789301/AnsiballZ_file.py'
Nov 22 05:39:51 compute-0 sudo[204931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:52 compute-0 python3.9[204933]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:39:52 compute-0 sudo[204931]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:39:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:39:52 compute-0 sudo[205083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifvyfvxmstpiinxwvwvvipudaovenjjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789992.4186914-554-69885207164514/AnsiballZ_stat.py'
Nov 22 05:39:52 compute-0 sudo[205083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:53 compute-0 python3.9[205085]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:39:53 compute-0 sudo[205083]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:53 compute-0 ceph-mon[75840]: pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:53 compute-0 sudo[205208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tircfqshnqdenhnhlthnydgygrbppdwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789992.4186914-554-69885207164514/AnsiballZ_copy.py'
Nov 22 05:39:53 compute-0 sudo[205208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:54 compute-0 python3.9[205210]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789992.4186914-554-69885207164514/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:39:54 compute-0 sudo[205208]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:54 compute-0 sudo[205360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mseqtjhqrftjytxzgbiawtlgehykecgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789994.3001108-554-182333866704513/AnsiballZ_stat.py'
Nov 22 05:39:54 compute-0 sudo[205360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:54 compute-0 python3.9[205362]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:39:54 compute-0 sudo[205360]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:55 compute-0 sudo[205485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcatfsnaarhddbjlyybtgyiggbxumcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789994.3001108-554-182333866704513/AnsiballZ_copy.py'
Nov 22 05:39:55 compute-0 sudo[205485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:55 compute-0 ceph-mon[75840]: pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:55 compute-0 python3.9[205487]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789994.3001108-554-182333866704513/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:39:55 compute-0 sudo[205485]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:55 compute-0 sudo[205488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:55 compute-0 sudo[205488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:55 compute-0 sudo[205488]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:55 compute-0 sudo[205537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:39:55 compute-0 sudo[205537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:55 compute-0 sudo[205537]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:55 compute-0 sudo[205585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:55 compute-0 sudo[205585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:55 compute-0 sudo[205585]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:56 compute-0 sudo[205639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:39:56 compute-0 sudo[205639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:56 compute-0 sudo[205752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxgcwdbbznqvjgwxoxupsiumkvqwfqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789995.9056485-554-209219980484898/AnsiballZ_stat.py'
Nov 22 05:39:56 compute-0 sudo[205752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:56 compute-0 python3.9[205754]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:39:56 compute-0 sudo[205752]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:56 compute-0 sudo[205639]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:56 compute-0 ceph-mon[75840]: pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:39:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 87800006-9d50-4a91-a01d-143edbad4855 does not exist
Nov 22 05:39:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4d3a473f-79ee-45f8-8767-8e938461cfa4 does not exist
Nov 22 05:39:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e115d2b2-7d23-415f-aea5-49bdc1b47c6a does not exist
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:39:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:39:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:39:56 compute-0 sudo[205828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:56 compute-0 sudo[205828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:56 compute-0 sudo[205828]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:56 compute-0 sudo[205869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:39:56 compute-0 sudo[205869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:56 compute-0 sudo[205869]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:57 compute-0 sudo[205962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovkhjnpmjvzpgfydvhihwlyqcmbwlcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789995.9056485-554-209219980484898/AnsiballZ_copy.py'
Nov 22 05:39:57 compute-0 sudo[205962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:57 compute-0 sudo[205927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:57 compute-0 sudo[205927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:57 compute-0 sudo[205927]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:57 compute-0 sudo[205972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:39:57 compute-0 sudo[205972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:57 compute-0 python3.9[205969]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789995.9056485-554-209219980484898/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:39:57 compute-0 sudo[205962]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.558628703 +0000 UTC m=+0.076003553 container create 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:39:57 compute-0 systemd[1]: Started libpod-conmon-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope.
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.524746003 +0000 UTC m=+0.042120903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:39:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.684106614 +0000 UTC m=+0.201481524 container init 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:39:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.693023913 +0000 UTC m=+0.210398763 container start 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.698185152 +0000 UTC m=+0.215559992 container attach 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:39:57 compute-0 sharp_tesla[206152]: 167 167
Nov 22 05:39:57 compute-0 systemd[1]: libpod-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope: Deactivated successfully.
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.70178989 +0000 UTC m=+0.219164710 container died 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-65130f7d2423741b9c440d12774ba1310ee9fcdd93ec899fc6eb991d72aaeca0-merged.mount: Deactivated successfully.
Nov 22 05:39:57 compute-0 podman[206102]: 2025-11-22 05:39:57.756121949 +0000 UTC m=+0.273496769 container remove 693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_tesla, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:39:57 compute-0 systemd[1]: libpod-conmon-693da294e35d65b9e8576e7c58cc7b629b69595ed9b0db4dd06380df265570d4.scope: Deactivated successfully.
Nov 22 05:39:57 compute-0 sudo[206220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qphvhmjgekfyjiforgyxzkkkptfgpqck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789997.4145343-554-275584828976283/AnsiballZ_stat.py'
Nov 22 05:39:57 compute-0 sudo[206220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:57 compute-0 podman[206228]: 2025-11-22 05:39:57.955022233 +0000 UTC m=+0.052470881 container create 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:39:58 compute-0 systemd[1]: Started libpod-conmon-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope.
Nov 22 05:39:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:58 compute-0 podman[206228]: 2025-11-22 05:39:57.937176563 +0000 UTC m=+0.034625231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:39:58 compute-0 python3.9[206222]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:39:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:39:58 compute-0 podman[206228]: 2025-11-22 05:39:58.063351443 +0000 UTC m=+0.160800101 container init 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:39:58 compute-0 podman[206228]: 2025-11-22 05:39:58.077630757 +0000 UTC m=+0.175079415 container start 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:39:58 compute-0 podman[206228]: 2025-11-22 05:39:58.085706874 +0000 UTC m=+0.183155542 container attach 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:39:58 compute-0 sudo[206220]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:58 compute-0 sudo[206371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwxbdwdsqjpvgbroinalpalbluzxwiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789997.4145343-554-275584828976283/AnsiballZ_copy.py'
Nov 22 05:39:58 compute-0 sudo[206371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:58 compute-0 ceph-mon[75840]: pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:39:58 compute-0 python3.9[206373]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789997.4145343-554-275584828976283/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:39:58 compute-0 sudo[206371]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:59 compute-0 strange_carson[206244]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:39:59 compute-0 strange_carson[206244]: --> relative data size: 1.0
Nov 22 05:39:59 compute-0 strange_carson[206244]: --> All data devices are unavailable
Nov 22 05:39:59 compute-0 systemd[1]: libpod-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Deactivated successfully.
Nov 22 05:39:59 compute-0 podman[206228]: 2025-11-22 05:39:59.265981874 +0000 UTC m=+1.363430542 container died 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:39:59 compute-0 systemd[1]: libpod-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Consumed 1.102s CPU time.
Nov 22 05:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2ff32bf6c4a5481a15110b5d8c73dfc7e908657e1044926faa6db1028a3351e-merged.mount: Deactivated successfully.
Nov 22 05:39:59 compute-0 podman[206228]: 2025-11-22 05:39:59.347219386 +0000 UTC m=+1.444668024 container remove 2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_carson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:39:59 compute-0 systemd[1]: libpod-conmon-2680a80b5511b79aa6131d9eb360738ab2ab1caaa675ca1b4b994c1aca422654.scope: Deactivated successfully.
Nov 22 05:39:59 compute-0 sudo[205972]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:59 compute-0 sudo[206564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbdpduitmpckizdulkmyrbtwbjfaplcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789999.0140114-554-10862158167396/AnsiballZ_stat.py'
Nov 22 05:39:59 compute-0 sudo[206564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:39:59 compute-0 sudo[206557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:59 compute-0 sudo[206557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:59 compute-0 sudo[206557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:59 compute-0 sudo[206587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:39:59 compute-0 sudo[206587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:59 compute-0 sudo[206587]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:59 compute-0 sudo[206612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:39:59 compute-0 sudo[206612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:59 compute-0 sudo[206612]: pam_unix(sudo:session): session closed for user root
Nov 22 05:39:59 compute-0 python3.9[206584]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:39:59 compute-0 sudo[206637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:39:59 compute-0 sudo[206637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:39:59 compute-0 sudo[206564]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.121226061 +0000 UTC m=+0.070279009 container create 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:40:00 compute-0 systemd[1]: Started libpod-conmon-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope.
Nov 22 05:40:00 compute-0 sudo[206841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhogsqpjpiemdzeoyrxwlcxgpdzayha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763789999.0140114-554-10862158167396/AnsiballZ_copy.py'
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.094330878 +0000 UTC m=+0.043383877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:40:00 compute-0 sudo[206841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.214675821 +0000 UTC m=+0.163728839 container init 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.22541792 +0000 UTC m=+0.174470848 container start 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.229260563 +0000 UTC m=+0.178313531 container attach 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:40:00 compute-0 systemd[1]: libpod-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope: Deactivated successfully.
Nov 22 05:40:00 compute-0 awesome_ellis[206842]: 167 167
Nov 22 05:40:00 compute-0 conmon[206842]: conmon 4828c08248a96a2ca63e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope/container/memory.events
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.232954923 +0000 UTC m=+0.182007881 container died 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd3111d375082a87dba417ba142cce0ad385fda20380928c6c27e7ceb6a7a2b-merged.mount: Deactivated successfully.
Nov 22 05:40:00 compute-0 podman[206782]: 2025-11-22 05:40:00.28721565 +0000 UTC m=+0.236268578 container remove 4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_ellis, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:40:00 compute-0 systemd[1]: libpod-conmon-4828c08248a96a2ca63ed3509a4b62cac69c955658ed4e31d73262ed1f660808.scope: Deactivated successfully.
Nov 22 05:40:00 compute-0 python3.9[206846]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763789999.0140114-554-10862158167396/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:00 compute-0 sudo[206841]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:00 compute-0 podman[206867]: 2025-11-22 05:40:00.504044306 +0000 UTC m=+0.049443179 container create c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:40:00 compute-0 systemd[1]: Started libpod-conmon-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope.
Nov 22 05:40:00 compute-0 podman[206867]: 2025-11-22 05:40:00.477669847 +0000 UTC m=+0.023068800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:00 compute-0 podman[206867]: 2025-11-22 05:40:00.626587069 +0000 UTC m=+0.171985972 container init c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:40:00 compute-0 podman[206867]: 2025-11-22 05:40:00.640068 +0000 UTC m=+0.185466893 container start c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:40:00 compute-0 podman[206867]: 2025-11-22 05:40:00.645499606 +0000 UTC m=+0.190898519 container attach c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:40:01 compute-0 sudo[207037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-redaaystzsmepydzwasxvncggdaysggh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790000.6305976-554-248235703689124/AnsiballZ_stat.py'
Nov 22 05:40:01 compute-0 sudo[207037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:01 compute-0 ceph-mon[75840]: pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:01 compute-0 python3.9[207039]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:01 compute-0 sudo[207037]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]: {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     "0": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "devices": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "/dev/loop3"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             ],
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_name": "ceph_lv0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_size": "21470642176",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "name": "ceph_lv0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "tags": {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_name": "ceph",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.crush_device_class": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.encrypted": "0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_id": "0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.vdo": "0"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             },
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "vg_name": "ceph_vg0"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         }
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     ],
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     "1": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "devices": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "/dev/loop4"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             ],
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_name": "ceph_lv1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_size": "21470642176",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "name": "ceph_lv1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "tags": {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_name": "ceph",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.crush_device_class": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.encrypted": "0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_id": "1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.vdo": "0"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             },
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "vg_name": "ceph_vg1"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         }
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     ],
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     "2": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "devices": [
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "/dev/loop5"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             ],
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_name": "ceph_lv2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_size": "21470642176",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "name": "ceph_lv2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "tags": {
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.cluster_name": "ceph",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.crush_device_class": "",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.encrypted": "0",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osd_id": "2",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:                 "ceph.vdo": "0"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             },
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "type": "block",
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:             "vg_name": "ceph_vg2"
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:         }
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]:     ]
Nov 22 05:40:01 compute-0 adoring_sanderson[206907]: }
Nov 22 05:40:01 compute-0 systemd[1]: libpod-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope: Deactivated successfully.
Nov 22 05:40:01 compute-0 podman[206867]: 2025-11-22 05:40:01.427465365 +0000 UTC m=+0.972864268 container died c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-64af534e92309ba1d0704f1784b80d487f0bad103eb294d348e526a723503d98-merged.mount: Deactivated successfully.
Nov 22 05:40:01 compute-0 podman[206867]: 2025-11-22 05:40:01.502566872 +0000 UTC m=+1.047965775 container remove c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:40:01 compute-0 systemd[1]: libpod-conmon-c17704fab628954c8eac4ec780c99722d12fd6a67ef3460a60a6811b347ce08e.scope: Deactivated successfully.
Nov 22 05:40:01 compute-0 sudo[206637]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:01 compute-0 sudo[207111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:40:01 compute-0 sudo[207111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:01 compute-0 sudo[207111]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:01 compute-0 sudo[207160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:40:01 compute-0 sudo[207160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:01 compute-0 sudo[207160]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:01 compute-0 sudo[207207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:40:01 compute-0 sudo[207207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:01 compute-0 sudo[207207]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:01 compute-0 sudo[207253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faqomgpcqkdsjxcyyqooisijpfmjnepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790000.6305976-554-248235703689124/AnsiballZ_copy.py'
Nov 22 05:40:01 compute-0 sudo[207253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:01 compute-0 sudo[207257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:40:01 compute-0 sudo[207257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:02 compute-0 podman[207282]: 2025-11-22 05:40:02.000779318 +0000 UTC m=+0.115897385 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 05:40:02 compute-0 python3.9[207259]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790000.6305976-554-248235703689124/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:02 compute-0 sudo[207253]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.229345868 +0000 UTC m=+0.044541287 container create 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:40:02 compute-0 systemd[1]: Started libpod-conmon-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope.
Nov 22 05:40:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.207137471 +0000 UTC m=+0.022332950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.30828371 +0000 UTC m=+0.123479229 container init 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.320930779 +0000 UTC m=+0.136126228 container start 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.324821044 +0000 UTC m=+0.140016513 container attach 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:40:02 compute-0 interesting_hertz[207438]: 167 167
Nov 22 05:40:02 compute-0 systemd[1]: libpod-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope: Deactivated successfully.
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.330200748 +0000 UTC m=+0.145396207 container died 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 05:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-64144b63db691be2d585bb1921ee1fec574680a4ce6a86448f2c385db5273688-merged.mount: Deactivated successfully.
Nov 22 05:40:02 compute-0 podman[207393]: 2025-11-22 05:40:02.380187251 +0000 UTC m=+0.195382710 container remove 6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:40:02 compute-0 systemd[1]: libpod-conmon-6a82b1ee1e080f534187450c989b77db20a7eeebfd40d004b0de8e67c34dd6f5.scope: Deactivated successfully.
Nov 22 05:40:02 compute-0 sudo[207528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mumowlelhottxtpcjbxsmndtnmpdkrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790002.1776078-554-159655940205728/AnsiballZ_stat.py'
Nov 22 05:40:02 compute-0 sudo[207528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:02 compute-0 podman[207536]: 2025-11-22 05:40:02.628731479 +0000 UTC m=+0.065291325 container create c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:40:02 compute-0 systemd[1]: Started libpod-conmon-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope.
Nov 22 05:40:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:40:02 compute-0 podman[207536]: 2025-11-22 05:40:02.603229293 +0000 UTC m=+0.039789209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:40:02 compute-0 python3.9[207531]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:02 compute-0 podman[207536]: 2025-11-22 05:40:02.719817426 +0000 UTC m=+0.156377282 container init c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:40:02 compute-0 sudo[207528]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:02 compute-0 podman[207536]: 2025-11-22 05:40:02.734936271 +0000 UTC m=+0.171496147 container start c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:40:02 compute-0 podman[207536]: 2025-11-22 05:40:02.739761732 +0000 UTC m=+0.176328438 container attach c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:40:03 compute-0 ceph-mon[75840]: pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:03 compute-0 sudo[207677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plblcahzvliwxkbkymozcohvpzsqwqpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790002.1776078-554-159655940205728/AnsiballZ_copy.py'
Nov 22 05:40:03 compute-0 sudo[207677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:03 compute-0 python3.9[207679]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790002.1776078-554-159655940205728/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:03 compute-0 sudo[207677]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:03 compute-0 pensive_joliot[207552]: {
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_id": 1,
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "type": "bluestore"
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     },
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_id": 2,
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "type": "bluestore"
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     },
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_id": 0,
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:         "type": "bluestore"
Nov 22 05:40:03 compute-0 pensive_joliot[207552]:     }
Nov 22 05:40:03 compute-0 pensive_joliot[207552]: }
Nov 22 05:40:03 compute-0 systemd[1]: libpod-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Deactivated successfully.
Nov 22 05:40:03 compute-0 systemd[1]: libpod-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Consumed 1.080s CPU time.
Nov 22 05:40:03 compute-0 podman[207536]: 2025-11-22 05:40:03.806182533 +0000 UTC m=+1.242742399 container died c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:40:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a73722c4cba49087ff0c4ba47587eeaa1035af63c3e466548c5c0f0954707d9-merged.mount: Deactivated successfully.
Nov 22 05:40:03 compute-0 podman[207536]: 2025-11-22 05:40:03.879284407 +0000 UTC m=+1.315844253 container remove c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:40:03 compute-0 systemd[1]: libpod-conmon-c677593669dcab5515a4f849d2ff6c1774ec74d9edba1997368e98d5a5f74cf7.scope: Deactivated successfully.
Nov 22 05:40:03 compute-0 sudo[207257]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:40:03 compute-0 sudo[207868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbxnlvrazjndkesfwlestqopxvurnym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790003.5771909-554-115549058670793/AnsiballZ_stat.py'
Nov 22 05:40:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:40:03 compute-0 sudo[207868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:40:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:40:03 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d9a61498-a88f-4f7e-bfb8-6b72853ce2b4 does not exist
Nov 22 05:40:03 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ca27401a-48bd-4194-96c5-48a7c9f7cac1 does not exist
Nov 22 05:40:04 compute-0 sudo[207871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:40:04 compute-0 sudo[207871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:04 compute-0 sudo[207871]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:04 compute-0 sudo[207896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:40:04 compute-0 sudo[207896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:40:04 compute-0 sudo[207896]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:04 compute-0 python3.9[207870]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:04 compute-0 sudo[207868]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:04 compute-0 sudo[208043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dljieiswcyravmvlhkvjknepqnjexpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790003.5771909-554-115549058670793/AnsiballZ_copy.py'
Nov 22 05:40:04 compute-0 sudo[208043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:04 compute-0 python3.9[208045]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763790003.5771909-554-115549058670793/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:04 compute-0 sudo[208043]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:40:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:40:04 compute-0 ceph-mon[75840]: pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:05 compute-0 sudo[208195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscoofbadtldgjudyjmhaapoookhsfwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790004.9276335-667-240275667074107/AnsiballZ_command.py'
Nov 22 05:40:05 compute-0 sudo[208195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:05 compute-0 python3.9[208197]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 22 05:40:05 compute-0 sudo[208195]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:06 compute-0 sudo[208348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpozeendomenirqxnuinvslhzlmnolbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790005.8466158-676-111633992223361/AnsiballZ_file.py'
Nov 22 05:40:06 compute-0 sudo[208348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:06 compute-0 python3.9[208350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:06 compute-0 sudo[208348]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:07 compute-0 sudo[208500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkjcrygqnbkufdflzeunszrbdiniskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790006.67176-676-257332491584557/AnsiballZ_file.py'
Nov 22 05:40:07 compute-0 sudo[208500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:07 compute-0 ceph-mon[75840]: pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:07 compute-0 python3.9[208502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:07 compute-0 sudo[208500]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:07 compute-0 sudo[208652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmtmmtbdihseghaxmtfaqcrecqozmaql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790007.5037086-676-133154654397924/AnsiballZ_file.py'
Nov 22 05:40:07 compute-0 sudo[208652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:08 compute-0 python3.9[208654]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:08 compute-0 sudo[208652]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:08 compute-0 sudo[208804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deszdmlmokcsteiyzfrotgcmuzvfrkil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790008.2444358-676-189783825949793/AnsiballZ_file.py'
Nov 22 05:40:08 compute-0 sudo[208804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:08 compute-0 python3.9[208806]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:08 compute-0 sudo[208804]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:09 compute-0 ceph-mon[75840]: pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:09 compute-0 sudo[208956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apujorzgbqdqlqewtucvlvqeyanzduua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790008.9364047-676-99086201573511/AnsiballZ_file.py'
Nov 22 05:40:09 compute-0 sudo[208956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:09 compute-0 python3.9[208958]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:09 compute-0 sudo[208956]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:10 compute-0 podman[209082]: 2025-11-22 05:40:10.145266613 +0000 UTC m=+0.053035086 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 05:40:10 compute-0 sudo[209128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpcythqttydgtcqnxbmiinmyfnxzwohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790009.774506-676-200944170030520/AnsiballZ_file.py'
Nov 22 05:40:10 compute-0 sudo[209128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:10 compute-0 python3.9[209130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:10 compute-0 sudo[209128]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:10 compute-0 sudo[209280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zisilfzhtakkjvocuzomiqzwgamgrzrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790010.478881-676-229820232039466/AnsiballZ_file.py'
Nov 22 05:40:10 compute-0 sudo[209280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:11 compute-0 python3.9[209282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:11 compute-0 sudo[209280]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:11 compute-0 ceph-mon[75840]: pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:11 compute-0 sudo[209432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwzrkkwygmuhjgobfdvfshypdagfigli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790011.2517405-676-263286004471440/AnsiballZ_file.py'
Nov 22 05:40:11 compute-0 sudo[209432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:11 compute-0 python3.9[209434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:11 compute-0 sudo[209432]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:12 compute-0 sudo[209584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmaayywebuaqrcegmvgrrxxozgtjfxiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790012.054718-676-225577071674659/AnsiballZ_file.py'
Nov 22 05:40:12 compute-0 sudo[209584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:12 compute-0 python3.9[209586]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:12 compute-0 sudo[209584]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:13 compute-0 ceph-mon[75840]: pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:13 compute-0 sudo[209736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcmexcjqvsjmwqkvpwtdfolombubrvzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790012.8586166-676-61350038240061/AnsiballZ_file.py'
Nov 22 05:40:13 compute-0 sudo[209736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:13 compute-0 python3.9[209738]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:13 compute-0 sudo[209736]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:13 compute-0 sudo[209888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvxpaxoakznvtemjdfwqvpztoqgsjdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790013.6663065-676-70386207931099/AnsiballZ_file.py'
Nov 22 05:40:13 compute-0 sudo[209888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:14 compute-0 python3.9[209890]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:14 compute-0 sudo[209888]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:14 compute-0 sudo[210040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizjtclgmfnbctdpvdbpwyetovuqymec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790014.3651717-676-92995844508797/AnsiballZ_file.py'
Nov 22 05:40:14 compute-0 sudo[210040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:15 compute-0 python3.9[210042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:15 compute-0 sudo[210040]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:15 compute-0 ceph-mon[75840]: pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:15 compute-0 sudo[210192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxrutinwxtvaswtulrnkszvtxkufrdox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790015.1950297-676-257879245526110/AnsiballZ_file.py'
Nov 22 05:40:15 compute-0 sudo[210192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:15 compute-0 python3.9[210194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:15 compute-0 sudo[210192]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:16 compute-0 sudo[210344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewuyeupmyrrssubecksvwaetqcrgnrgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790015.9441886-676-63401479204662/AnsiballZ_file.py'
Nov 22 05:40:16 compute-0 sudo[210344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:16 compute-0 python3.9[210346]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:16 compute-0 sudo[210344]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:17 compute-0 sudo[210496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpjmbpiwaheqhzsntbqiscdzmzxpdif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790016.7447383-775-222023302326255/AnsiballZ_stat.py'
Nov 22 05:40:17 compute-0 sudo[210496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:17 compute-0 ceph-mon[75840]: pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:17 compute-0 python3.9[210498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:17 compute-0 sudo[210496]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:17 compute-0 sudo[210619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoepypspynrxehuqopsktfsnplpyshys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790016.7447383-775-222023302326255/AnsiballZ_copy.py'
Nov 22 05:40:17 compute-0 sudo[210619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:17 compute-0 python3.9[210621]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790016.7447383-775-222023302326255/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:17 compute-0 sudo[210619]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:18 compute-0 sudo[210771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrllvpjjxqtxwgwbemeelnanzmfdptzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790018.1939106-775-8314261775271/AnsiballZ_stat.py'
Nov 22 05:40:18 compute-0 sudo[210771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:18 compute-0 python3.9[210773]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:18 compute-0 sudo[210771]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:19 compute-0 ceph-mon[75840]: pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:19 compute-0 sudo[210894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seaxwjxijtxrkagzhcuxxxfvypluvswg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790018.1939106-775-8314261775271/AnsiballZ_copy.py'
Nov 22 05:40:19 compute-0 sudo[210894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:19 compute-0 python3.9[210896]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790018.1939106-775-8314261775271/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:19 compute-0 sudo[210894]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:20 compute-0 sudo[211046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwjtsbhqbzcdqxesdnkveqhxmhrdhsnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790019.6675162-775-28106747771147/AnsiballZ_stat.py'
Nov 22 05:40:20 compute-0 sudo[211046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:20 compute-0 python3.9[211048]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:20 compute-0 sudo[211046]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:20 compute-0 sudo[211169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxhyqoulqosszyyosufwyhlkwzgdwrhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790019.6675162-775-28106747771147/AnsiballZ_copy.py'
Nov 22 05:40:20 compute-0 sudo[211169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:20 compute-0 python3.9[211171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790019.6675162-775-28106747771147/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:20 compute-0 sudo[211169]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:21 compute-0 ceph-mon[75840]: pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:21 compute-0 sudo[211321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-corxualjgczkolsflncqjmxmlozkqjjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790021.0868194-775-231388770830619/AnsiballZ_stat.py'
Nov 22 05:40:21 compute-0 sudo[211321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:21 compute-0 python3.9[211323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:21 compute-0 sudo[211321]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:22 compute-0 sudo[211444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihmutivjypsvglgxjdsftepwsszmllz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790021.0868194-775-231388770830619/AnsiballZ_copy.py'
Nov 22 05:40:22 compute-0 sudo[211444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:22 compute-0 python3.9[211446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790021.0868194-775-231388770830619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:22 compute-0 sudo[211444]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:23 compute-0 sudo[211596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnugjbcvllekzsacxfvdsliaucotcwiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790022.6134028-775-228461831331911/AnsiballZ_stat.py'
Nov 22 05:40:23 compute-0 sudo[211596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:23 compute-0 ceph-mon[75840]: pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:23 compute-0 python3.9[211598]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:23 compute-0 sudo[211596]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:23 compute-0 sudo[211719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdggsixnfgvzkhbcegxgwxrraysbtpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790022.6134028-775-228461831331911/AnsiballZ_copy.py'
Nov 22 05:40:23 compute-0 sudo[211719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:23 compute-0 python3.9[211721]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790022.6134028-775-228461831331911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:23 compute-0 sudo[211719]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:24 compute-0 sudo[211871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxvyxetpmwvxtneqbtoitkjsudqueazm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790024.1103923-775-85823424669249/AnsiballZ_stat.py'
Nov 22 05:40:24 compute-0 sudo[211871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:24 compute-0 python3.9[211873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:24 compute-0 sudo[211871]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:25 compute-0 ceph-mon[75840]: pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:25 compute-0 sudo[211994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvbwuyxqhxavvwbaodlizkcnlliosya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790024.1103923-775-85823424669249/AnsiballZ_copy.py'
Nov 22 05:40:25 compute-0 sudo[211994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:25 compute-0 python3.9[211996]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790024.1103923-775-85823424669249/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:25 compute-0 sudo[211994]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:26 compute-0 sudo[212146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tduocsnblnaxoelnjoosttwcmqajgaif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790025.633697-775-91129063595278/AnsiballZ_stat.py'
Nov 22 05:40:26 compute-0 sudo[212146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:26 compute-0 python3.9[212148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:26 compute-0 sudo[212146]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:26 compute-0 sudo[212269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reehfhxxbootwgawzbhudooemirvlmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790025.633697-775-91129063595278/AnsiballZ_copy.py'
Nov 22 05:40:26 compute-0 sudo[212269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:26 compute-0 python3.9[212271]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790025.633697-775-91129063595278/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:26 compute-0 sudo[212269]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:27 compute-0 ceph-mon[75840]: pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:27 compute-0 sudo[212421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvgbcqtjkqhzphfijlmuyhzprcvqydq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790027.0972097-775-49060545345133/AnsiballZ_stat.py'
Nov 22 05:40:27 compute-0 sudo[212421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:27 compute-0 python3.9[212423]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:27 compute-0 sudo[212421]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:28 compute-0 sudo[212544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kteunpnltqmtpfarqgflsfzrfpbbdsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790027.0972097-775-49060545345133/AnsiballZ_copy.py'
Nov 22 05:40:28 compute-0 sudo[212544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:28 compute-0 python3.9[212546]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790027.0972097-775-49060545345133/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:28 compute-0 sudo[212544]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:28 compute-0 sudo[212696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugltdlqyepkcdqjxkxdaaotzoklaknu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790028.5408368-775-216727225126009/AnsiballZ_stat.py'
Nov 22 05:40:28 compute-0 sudo[212696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:29 compute-0 python3.9[212698]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:29 compute-0 sudo[212696]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:29 compute-0 ceph-mon[75840]: pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:29 compute-0 sudo[212819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axmdeycoclivavjagukmwdzpwpbjsaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790028.5408368-775-216727225126009/AnsiballZ_copy.py'
Nov 22 05:40:29 compute-0 sudo[212819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:29 compute-0 python3.9[212821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790028.5408368-775-216727225126009/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:29 compute-0 sudo[212819]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:30 compute-0 sudo[212971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzipgwhikfbarjkewvxjoiantegjrpcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790029.9407368-775-26006056738333/AnsiballZ_stat.py'
Nov 22 05:40:30 compute-0 sudo[212971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:30 compute-0 python3.9[212973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:30 compute-0 sudo[212971]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:30 compute-0 sudo[213094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvinvqumqyeavhxfhxslxhcvmoufucqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790029.9407368-775-26006056738333/AnsiballZ_copy.py'
Nov 22 05:40:30 compute-0 sudo[213094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:31 compute-0 python3.9[213096]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790029.9407368-775-26006056738333/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:31 compute-0 sudo[213094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:31 compute-0 ceph-mon[75840]: pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:31 compute-0 sudo[213246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvogpiojawsrdiqoaesyolhmyhngxia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790031.3101478-775-8046351819222/AnsiballZ_stat.py'
Nov 22 05:40:31 compute-0 sudo[213246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:31 compute-0 python3.9[213248]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:31 compute-0 sudo[213246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:32 compute-0 podman[213317]: 2025-11-22 05:40:32.28168766 +0000 UTC m=+0.123158950 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:40:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:32 compute-0 sudo[213395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvrzsgguotngrjqfbihwmpkpeluqxyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790031.3101478-775-8046351819222/AnsiballZ_copy.py'
Nov 22 05:40:32 compute-0 sudo[213395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:32 compute-0 python3.9[213397]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790031.3101478-775-8046351819222/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:32 compute-0 sudo[213395]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:33 compute-0 sudo[213547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crncxugaggqnocncttujmvvvhdfispya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790032.735274-775-174350781137421/AnsiballZ_stat.py'
Nov 22 05:40:33 compute-0 sudo[213547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:33 compute-0 ceph-mon[75840]: pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:33 compute-0 python3.9[213549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:33 compute-0 sudo[213547]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:33 compute-0 sudo[213670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqvuopjqvnxoyzrglfaqvddstzvajkye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790032.735274-775-174350781137421/AnsiballZ_copy.py'
Nov 22 05:40:33 compute-0 sudo[213670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:34 compute-0 python3.9[213672]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790032.735274-775-174350781137421/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:34 compute-0 sudo[213670]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:34 compute-0 sudo[213822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnoajiwpeqlqjyvushrbntoxqphkzbwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790034.2511065-775-24247054273734/AnsiballZ_stat.py'
Nov 22 05:40:34 compute-0 sudo[213822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:34 compute-0 python3.9[213824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:34 compute-0 sudo[213822]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:35 compute-0 ceph-mon[75840]: pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:35 compute-0 sudo[213945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpzwtgmhlctaggyxqzkkfkjlddoifcbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790034.2511065-775-24247054273734/AnsiballZ_copy.py'
Nov 22 05:40:35 compute-0 sudo[213945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:35 compute-0 python3.9[213947]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790034.2511065-775-24247054273734/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:35 compute-0 sudo[213945]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:36 compute-0 sudo[214097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xybrdjmfluydxyfrxzafmgtkcqsyhowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790035.7544074-775-170775090634160/AnsiballZ_stat.py'
Nov 22 05:40:36 compute-0 sudo[214097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:36 compute-0 python3.9[214099]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:40:36 compute-0 sudo[214097]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.903 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:40:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.903 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:40:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:40:36.904 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:40:36 compute-0 sudo[214220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvozowmytflzqubpiltlpajkdzlkafrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790035.7544074-775-170775090634160/AnsiballZ_copy.py'
Nov 22 05:40:36 compute-0 sudo[214220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:37 compute-0 python3.9[214222]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790035.7544074-775-170775090634160/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:37 compute-0 sudo[214220]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:37 compute-0 ceph-mon[75840]: pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:38 compute-0 python3.9[214372]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:40:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:38 compute-0 sudo[214525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fegptgiemwdwhgiclhdicmmkopibkpoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790038.2923717-981-64847739390474/AnsiballZ_seboolean.py'
Nov 22 05:40:38 compute-0 sudo[214525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:39 compute-0 python3.9[214527]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 22 05:40:39 compute-0 ceph-mon[75840]: pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:40 compute-0 sudo[214525]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:40 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 22 05:40:41 compute-0 sudo[214699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfibdstkkhuznnougkujucxpshgbnqzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790040.6114345-989-64097886779408/AnsiballZ_copy.py'
Nov 22 05:40:41 compute-0 podman[214655]: 2025-11-22 05:40:41.031872926 +0000 UTC m=+0.080768961 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:40:41 compute-0 sudo[214699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:41 compute-0 python3.9[214703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:41 compute-0 sudo[214699]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:41 compute-0 ceph-mon[75840]: pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:41 compute-0 sudo[214853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbbdrftvjosbmwcppugvxmiieubjbnbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790041.4819534-989-107282557139576/AnsiballZ_copy.py'
Nov 22 05:40:41 compute-0 sudo[214853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:41 compute-0 python3.9[214855]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:41 compute-0 sudo[214853]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:42 compute-0 sudo[215005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firzioilhssmasrtlotevmulouudhidg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790042.1823375-989-188147752153327/AnsiballZ_copy.py'
Nov 22 05:40:42 compute-0 sudo[215005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:42 compute-0 python3.9[215007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:42 compute-0 sudo[215005]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:43 compute-0 sudo[215157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjzrynrdybepaagifeakqxrcwchfajc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790043.0536613-989-150230853700982/AnsiballZ_copy.py'
Nov 22 05:40:43 compute-0 sudo[215157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:43 compute-0 ceph-mon[75840]: pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:43 compute-0 python3.9[215159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:43 compute-0 sudo[215157]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:40:43
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes']
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:40:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:40:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:44 compute-0 sudo[215309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqcpifssptrubbhhsqnkwwlchwadevda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790043.855512-989-162630456936556/AnsiballZ_copy.py'
Nov 22 05:40:44 compute-0 sudo[215309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:44 compute-0 python3.9[215311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:44 compute-0 sudo[215309]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:45 compute-0 sudo[215461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikpigfnuitxuciogpibwjtckygvzszjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790044.6970217-1025-122518122236797/AnsiballZ_copy.py'
Nov 22 05:40:45 compute-0 sudo[215461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:45 compute-0 python3.9[215463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:45 compute-0 sudo[215461]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:45 compute-0 ceph-mon[75840]: pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:45 compute-0 sudo[215613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzjcyskubekceqpagtfhoykpjafhturx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790045.4967287-1025-236681553518318/AnsiballZ_copy.py'
Nov 22 05:40:45 compute-0 sudo[215613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:46 compute-0 python3.9[215615]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:46 compute-0 sudo[215613]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:46 compute-0 sudo[215765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyivflvqmcctsletiqphqkuphxythjbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790046.2443235-1025-182485679612798/AnsiballZ_copy.py'
Nov 22 05:40:46 compute-0 sudo[215765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:46 compute-0 python3.9[215767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:46 compute-0 sudo[215765]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:47 compute-0 sudo[215917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnhxgqugbqnzbimgjyqsfzaysyflpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790047.0024815-1025-80011974782625/AnsiballZ_copy.py'
Nov 22 05:40:47 compute-0 sudo[215917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:47 compute-0 ceph-mon[75840]: pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:47 compute-0 python3.9[215919]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:47 compute-0 sudo[215917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:48 compute-0 sudo[216069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfabalmipxftjfnoenfihvabpyzoeiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790047.8088388-1025-232132136013528/AnsiballZ_copy.py'
Nov 22 05:40:48 compute-0 sudo[216069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:48 compute-0 python3.9[216071]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:48 compute-0 sudo[216069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:49 compute-0 sudo[216221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfkoqmhexroumeoymaetqotbgupvivpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790048.677358-1061-168266363008199/AnsiballZ_systemd.py'
Nov 22 05:40:49 compute-0 sudo[216221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:49 compute-0 python3.9[216223]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:40:49 compute-0 systemd[1]: Reloading.
Nov 22 05:40:49 compute-0 ceph-mon[75840]: pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:49 compute-0 systemd-rc-local-generator[216249]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:40:49 compute-0 systemd-sysv-generator[216256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:40:49 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 22 05:40:49 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 22 05:40:49 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 22 05:40:49 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 22 05:40:49 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 22 05:40:49 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 22 05:40:50 compute-0 sudo[216221]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:50 compute-0 sudo[216414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egyoyrgxhevastqnokqwnlleikvgqwmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790050.2264924-1061-154397302207798/AnsiballZ_systemd.py'
Nov 22 05:40:50 compute-0 sudo[216414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:50 compute-0 python3.9[216416]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:40:50 compute-0 systemd[1]: Reloading.
Nov 22 05:40:51 compute-0 systemd-rc-local-generator[216441]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:40:51 compute-0 systemd-sysv-generator[216447]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:40:51 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 22 05:40:51 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 22 05:40:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 22 05:40:51 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 22 05:40:51 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 22 05:40:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 22 05:40:51 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 22 05:40:51 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 05:40:51 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 05:40:51 compute-0 sudo[216414]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:51 compute-0 ceph-mon[75840]: pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:51 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 22 05:40:51 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 22 05:40:51 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 22 05:40:52 compute-0 sudo[216638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olkzshlmdmcvtiqypqborbdzezksmpsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790051.651547-1061-214916568686705/AnsiballZ_systemd.py'
Nov 22 05:40:52 compute-0 sudo[216638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:52 compute-0 python3.9[216640]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:40:52 compute-0 systemd[1]: Reloading.
Nov 22 05:40:52 compute-0 systemd-sysv-generator[216673]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:40:52 compute-0 systemd-rc-local-generator[216669]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:40:52 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 22 05:40:52 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 22 05:40:52 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 22 05:40:52 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 22 05:40:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 05:40:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 05:40:52 compute-0 setroubleshoot[216453]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4af265fd-1c59-42ae-8de8-c99c06f445ef
Nov 22 05:40:52 compute-0 setroubleshoot[216453]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 22 05:40:52 compute-0 sudo[216638]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:40:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:40:53 compute-0 sudo[216851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmqstdnqsnekwkhrfyietnikaxgooxvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790052.9285443-1061-107779699116420/AnsiballZ_systemd.py'
Nov 22 05:40:53 compute-0 sudo[216851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:53 compute-0 ceph-mon[75840]: pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:53 compute-0 python3.9[216853]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:40:53 compute-0 systemd[1]: Reloading.
Nov 22 05:40:53 compute-0 systemd-rc-local-generator[216878]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:40:53 compute-0 systemd-sysv-generator[216883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:40:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:54 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 22 05:40:54 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 22 05:40:54 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 22 05:40:54 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 22 05:40:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 22 05:40:54 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 22 05:40:54 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 22 05:40:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 22 05:40:54 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 22 05:40:54 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 22 05:40:54 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 05:40:54 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 05:40:54 compute-0 sudo[216851]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:54 compute-0 sudo[217066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kztgddckyjucfuvhwbvjgrydyxsghffh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790054.416103-1061-80790642441735/AnsiballZ_systemd.py'
Nov 22 05:40:54 compute-0 sudo[217066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:55 compute-0 python3.9[217068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:40:55 compute-0 systemd[1]: Reloading.
Nov 22 05:40:55 compute-0 systemd-rc-local-generator[217097]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:40:55 compute-0 systemd-sysv-generator[217101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:40:55 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 22 05:40:55 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 22 05:40:55 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 22 05:40:55 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 22 05:40:55 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 22 05:40:55 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 22 05:40:55 compute-0 ceph-mon[75840]: pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:55 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 05:40:55 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 05:40:55 compute-0 sudo[217066]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:56 compute-0 sudo[217279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnufgceoxdkltwemjmguztlzbtyderyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790056.0558193-1098-127585592100665/AnsiballZ_file.py'
Nov 22 05:40:56 compute-0 sudo[217279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:56 compute-0 python3.9[217281]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:40:56 compute-0 sudo[217279]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:57 compute-0 sudo[217431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncsmpqhljcmaldvnucsapkuszzqptgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790056.963495-1106-141468587458053/AnsiballZ_find.py'
Nov 22 05:40:57 compute-0 sudo[217431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:40:57 compute-0 python3.9[217433]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:40:57 compute-0 sudo[217431]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:57 compute-0 ceph-mon[75840]: pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:40:58 compute-0 sudo[217583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqmfgeylbbifoulzkhrfgfyxheycgkgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790057.7666142-1114-17987826156501/AnsiballZ_command.py'
Nov 22 05:40:58 compute-0 sudo[217583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:40:58 compute-0 python3.9[217585]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:40:58 compute-0 sudo[217583]: pam_unix(sudo:session): session closed for user root
Nov 22 05:40:59 compute-0 python3.9[217739]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:40:59 compute-0 ceph-mon[75840]: pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:00 compute-0 python3.9[217889]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:00 compute-0 python3.9[218010]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790059.6562026-1133-189131636320962/.source.xml follow=False _original_basename=secret.xml.j2 checksum=5662cc1bfbb8c37741b42345b876b94b094e15c0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:01 compute-0 sudo[218160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmnaogznncctepbcgherizbtzfnfpdba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790061.047673-1148-49313339361309/AnsiballZ_command.py'
Nov 22 05:41:01 compute-0 sudo[218160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:01 compute-0 ceph-mon[75840]: pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:01 compute-0 python3.9[218162]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 13fdadc6-d566-5465-9ac8-a148ef130da1
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:01 compute-0 polkitd[44246]: Registered Authentication Agent for unix-process:218164:351510 (system bus name :1.2849 [pkttyagent --process 218164 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 05:41:01 compute-0 polkitd[44246]: Unregistered Authentication Agent for unix-process:218164:351510 (system bus name :1.2849, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 05:41:01 compute-0 polkitd[44246]: Registered Authentication Agent for unix-process:218163:351509 (system bus name :1.2850 [pkttyagent --process 218163 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 05:41:01 compute-0 polkitd[44246]: Unregistered Authentication Agent for unix-process:218163:351509 (system bus name :1.2850, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 05:41:01 compute-0 sudo[218160]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:02 compute-0 podman[218298]: 2025-11-22 05:41:02.472423724 +0000 UTC m=+0.116939233 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:41:02 compute-0 python3.9[218342]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:02 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 22 05:41:02 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 22 05:41:03 compute-0 sudo[218501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvtxpjxusulplwpnsheesluoylwekawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790062.891574-1164-38025312770819/AnsiballZ_command.py'
Nov 22 05:41:03 compute-0 sudo[218501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:03 compute-0 sudo[218501]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:03 compute-0 ceph-mon[75840]: pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:04 compute-0 sudo[218654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyprqvxxmrvuckazdcjqnugqbeashdgo ; FSID=13fdadc6-d566-5465-9ac8-a148ef130da1 KEY=AQDNSCFpAAAAABAAIxLSh4M1I5A41RBE4yCAiQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790063.7019584-1172-237438313016363/AnsiballZ_command.py'
Nov 22 05:41:04 compute-0 sudo[218654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:04 compute-0 sudo[218657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:04 compute-0 sudo[218657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:04 compute-0 sudo[218657]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:04 compute-0 sudo[218682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:41:04 compute-0 sudo[218682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:04 compute-0 sudo[218682]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:04 compute-0 sudo[218707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:04 compute-0 sudo[218707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:04 compute-0 sudo[218707]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:04 compute-0 polkitd[44246]: Registered Authentication Agent for unix-process:218715:351771 (system bus name :1.2856 [pkttyagent --process 218715 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 05:41:04 compute-0 polkitd[44246]: Unregistered Authentication Agent for unix-process:218715:351771 (system bus name :1.2856, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 05:41:04 compute-0 sudo[218654]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:04 compute-0 sudo[218738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:41:04 compute-0 sudo[218738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:04 compute-0 sudo[218937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqgpnpnivfnksdsyfcaaffssjthgublb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790064.6025567-1180-161545181647597/AnsiballZ_copy.py'
Nov 22 05:41:04 compute-0 sudo[218937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:05 compute-0 sudo[218738]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4cc3c8d1-af6e-4da9-8c97-8be3d5384614 does not exist
Nov 22 05:41:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 62e7be59-e2f7-4506-accd-332bbfc7673c does not exist
Nov 22 05:41:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 154b944e-b53f-484b-b313-3793d743b992 does not exist
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:41:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:41:05 compute-0 sudo[218946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:05 compute-0 sudo[218946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:05 compute-0 sudo[218946]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:05 compute-0 python3.9[218945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:05 compute-0 sudo[218937]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:05 compute-0 sudo[218971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:41:05 compute-0 sudo[218971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:05 compute-0 sudo[218971]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:05 compute-0 sudo[219019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:05 compute-0 sudo[219019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:05 compute-0 sudo[219019]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:05 compute-0 sudo[219045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:41:05 compute-0 sudo[219045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:05 compute-0 ceph-mon[75840]: pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:41:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.80373185 +0000 UTC m=+0.046383216 container create f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:41:05 compute-0 systemd[1]: Started libpod-conmon-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope.
Nov 22 05:41:05 compute-0 sudo[219252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eopfqkiyzbyjkshmuykyjworfxtzeqkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790065.459752-1188-181472067933015/AnsiballZ_stat.py'
Nov 22 05:41:05 compute-0 sudo[219252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.781303123 +0000 UTC m=+0.023954489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.901036829 +0000 UTC m=+0.143688225 container init f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.909975947 +0000 UTC m=+0.152627293 container start f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.913600363 +0000 UTC m=+0.156251789 container attach f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:41:05 compute-0 confident_bardeen[219254]: 167 167
Nov 22 05:41:05 compute-0 systemd[1]: libpod-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope: Deactivated successfully.
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.918772981 +0000 UTC m=+0.161424337 container died f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f12749f5ec03b8961b78fa3d59516650021300ba4fac59d74cfdf674cdba5fe-merged.mount: Deactivated successfully.
Nov 22 05:41:05 compute-0 podman[219203]: 2025-11-22 05:41:05.965629909 +0000 UTC m=+0.208281265 container remove f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:41:05 compute-0 systemd[1]: libpod-conmon-f88d0264beb786dc3744922525fe0c29fb84b6b71b57d67808cb455cda2c1382.scope: Deactivated successfully.
Nov 22 05:41:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:06 compute-0 python3.9[219256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:06 compute-0 sudo[219252]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:06 compute-0 podman[219278]: 2025-11-22 05:41:06.177178209 +0000 UTC m=+0.059964227 container create 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:41:06 compute-0 systemd[1]: Started libpod-conmon-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope.
Nov 22 05:41:06 compute-0 podman[219278]: 2025-11-22 05:41:06.147913 +0000 UTC m=+0.030699058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:06 compute-0 podman[219278]: 2025-11-22 05:41:06.279862131 +0000 UTC m=+0.162648159 container init 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:41:06 compute-0 podman[219278]: 2025-11-22 05:41:06.293804812 +0000 UTC m=+0.176590800 container start 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:41:06 compute-0 podman[219278]: 2025-11-22 05:41:06.297940502 +0000 UTC m=+0.180726490 container attach 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:41:06 compute-0 sudo[219419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsfyhqyornfqmaxpvixuxzzwstcivakn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790065.459752-1188-181472067933015/AnsiballZ_copy.py'
Nov 22 05:41:06 compute-0 sudo[219419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:06 compute-0 python3.9[219421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790065.459752-1188-181472067933015/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:06 compute-0 sudo[219419]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:07 compute-0 suspicious_pasteur[219320]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:41:07 compute-0 suspicious_pasteur[219320]: --> relative data size: 1.0
Nov 22 05:41:07 compute-0 suspicious_pasteur[219320]: --> All data devices are unavailable
Nov 22 05:41:07 compute-0 systemd[1]: libpod-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Deactivated successfully.
Nov 22 05:41:07 compute-0 systemd[1]: libpod-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Consumed 1.075s CPU time.
Nov 22 05:41:07 compute-0 podman[219278]: 2025-11-22 05:41:07.434994752 +0000 UTC m=+1.317780760 container died 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:41:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3b740741b421aee7c6b353bb5e43b1ab5be00320807d8e9ae4767d15f4ec34c-merged.mount: Deactivated successfully.
Nov 22 05:41:07 compute-0 podman[219278]: 2025-11-22 05:41:07.516134452 +0000 UTC m=+1.398920470 container remove 047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:41:07 compute-0 sudo[219606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqfqwmbnrbjorylchpvenadcivegewpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790067.1313531-1204-199058800783434/AnsiballZ_file.py'
Nov 22 05:41:07 compute-0 systemd[1]: libpod-conmon-047a08f025761e59ca33b4363f2abe393b9bbb99d364b9e6696c4e37e674af80.scope: Deactivated successfully.
Nov 22 05:41:07 compute-0 sudo[219606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:07 compute-0 sudo[219045]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 sudo[219609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:07 compute-0 sudo[219609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:07 compute-0 sudo[219609]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 ceph-mon[75840]: pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:07 compute-0 python3.9[219608]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:07 compute-0 sudo[219634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:41:07 compute-0 sudo[219634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:07 compute-0 sudo[219634]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 sudo[219606]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 sudo[219659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:07 compute-0 sudo[219659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:07 compute-0 sudo[219659]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:07 compute-0 sudo[219708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:41:07 compute-0 sudo[219708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.277369781 +0000 UTC m=+0.065302439 container create 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:41:08 compute-0 systemd[1]: Started libpod-conmon-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope.
Nov 22 05:41:08 compute-0 sudo[219917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxifynynyatovoflngyyaijprelvmcpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790067.9587507-1212-177314893100844/AnsiballZ_stat.py'
Nov 22 05:41:08 compute-0 sudo[219917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.250456814 +0000 UTC m=+0.038389542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.372583245 +0000 UTC m=+0.160515903 container init 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.3840777 +0000 UTC m=+0.172010348 container start 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.388012485 +0000 UTC m=+0.175945173 container attach 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:41:08 compute-0 quizzical_hoover[219919]: 167 167
Nov 22 05:41:08 compute-0 systemd[1]: libpod-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope: Deactivated successfully.
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.393352657 +0000 UTC m=+0.181285335 container died 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-09289beda1d720a2a225c01679d294b255b9d0a23ed9a68509ffa8125cfeb389-merged.mount: Deactivated successfully.
Nov 22 05:41:08 compute-0 podman[219869]: 2025-11-22 05:41:08.448942576 +0000 UTC m=+0.236875264 container remove 0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hoover, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:41:08 compute-0 systemd[1]: libpod-conmon-0294c4f9392f01348d52b71deb13cd58dd27e95d03eff8c3accc00997bdce115.scope: Deactivated successfully.
Nov 22 05:41:08 compute-0 python3.9[219921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:08 compute-0 sudo[219917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:08 compute-0 podman[219945]: 2025-11-22 05:41:08.708269308 +0000 UTC m=+0.070706513 container create 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:41:08 compute-0 systemd[1]: Started libpod-conmon-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope.
Nov 22 05:41:08 compute-0 podman[219945]: 2025-11-22 05:41:08.681932467 +0000 UTC m=+0.044369672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:08 compute-0 sshd-session[219879]: Invalid user slot from 80.94.92.166 port 59124
Nov 22 05:41:08 compute-0 podman[219945]: 2025-11-22 05:41:08.841582526 +0000 UTC m=+0.204019721 container init 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 05:41:08 compute-0 podman[219945]: 2025-11-22 05:41:08.849837496 +0000 UTC m=+0.212274691 container start 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:41:08 compute-0 podman[219945]: 2025-11-22 05:41:08.854176051 +0000 UTC m=+0.216613306 container attach 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:41:08 compute-0 sudo[220039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfzkfafdifvxklxvsdlnwscwgblujwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790067.9587507-1212-177314893100844/AnsiballZ_file.py'
Nov 22 05:41:08 compute-0 sudo[220039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:08 compute-0 sshd-session[219879]: Connection closed by invalid user slot 80.94.92.166 port 59124 [preauth]
Nov 22 05:41:09 compute-0 python3.9[220041]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:09 compute-0 sudo[220039]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:09 compute-0 wonderful_jones[220002]: {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     "0": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "devices": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "/dev/loop3"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             ],
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_name": "ceph_lv0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_size": "21470642176",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "name": "ceph_lv0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "tags": {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_name": "ceph",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.crush_device_class": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.encrypted": "0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_id": "0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.vdo": "0"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             },
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "vg_name": "ceph_vg0"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         }
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     ],
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     "1": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "devices": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "/dev/loop4"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             ],
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_name": "ceph_lv1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_size": "21470642176",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "name": "ceph_lv1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "tags": {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_name": "ceph",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.crush_device_class": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.encrypted": "0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_id": "1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.vdo": "0"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             },
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "vg_name": "ceph_vg1"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         }
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     ],
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     "2": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "devices": [
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "/dev/loop5"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             ],
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_name": "ceph_lv2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_size": "21470642176",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "name": "ceph_lv2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "tags": {
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.cluster_name": "ceph",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.crush_device_class": "",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.encrypted": "0",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osd_id": "2",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:                 "ceph.vdo": "0"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             },
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "type": "block",
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:             "vg_name": "ceph_vg2"
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:         }
Nov 22 05:41:09 compute-0 wonderful_jones[220002]:     ]
Nov 22 05:41:09 compute-0 wonderful_jones[220002]: }
Nov 22 05:41:09 compute-0 ceph-mon[75840]: pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:09 compute-0 systemd[1]: libpod-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope: Deactivated successfully.
Nov 22 05:41:09 compute-0 podman[219945]: 2025-11-22 05:41:09.67103874 +0000 UTC m=+1.033475975 container died 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc56de9641e4d8d93007962b2da6018702d08ff82d36a413efff9d425abf7a2-merged.mount: Deactivated successfully.
Nov 22 05:41:09 compute-0 podman[219945]: 2025-11-22 05:41:09.756805842 +0000 UTC m=+1.119243017 container remove 1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_jones, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 05:41:09 compute-0 systemd[1]: libpod-conmon-1aaa3b75186c2fa08b8a32735a6e4f36895791415deee17347f6392e4e79fa80.scope: Deactivated successfully.
Nov 22 05:41:09 compute-0 sudo[219708]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:09 compute-0 sudo[220207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urehzlqljzhsgojintbpugtbtqqrmmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790069.3763762-1224-238689195673786/AnsiballZ_stat.py'
Nov 22 05:41:09 compute-0 sudo[220207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:09 compute-0 sudo[220210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:09 compute-0 sudo[220210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:09 compute-0 sudo[220210]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:09 compute-0 sudo[220235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:41:09 compute-0 sudo[220235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:09 compute-0 sudo[220235]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:10 compute-0 python3.9[220209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:10 compute-0 sudo[220207]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:10 compute-0 sudo[220260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:10 compute-0 sudo[220260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:10 compute-0 sudo[220260]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:10 compute-0 sudo[220291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:41:10 compute-0 sudo[220291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:10 compute-0 sudo[220387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eszzzsnbgrvzlpxipwmdfnbnmkqdqymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790069.3763762-1224-238689195673786/AnsiballZ_file.py'
Nov 22 05:41:10 compute-0 sudo[220387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:10 compute-0 python3.9[220395]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.o4lc69sp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:10 compute-0 sudo[220387]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.537383896 +0000 UTC m=+0.054604904 container create 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:41:10 compute-0 systemd[1]: Started libpod-conmon-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope.
Nov 22 05:41:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.512301739 +0000 UTC m=+0.029522817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.621381012 +0000 UTC m=+0.138602060 container init 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.632339674 +0000 UTC m=+0.149560702 container start 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.636807642 +0000 UTC m=+0.154028660 container attach 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:41:10 compute-0 dazzling_chebyshev[220469]: 167 167
Nov 22 05:41:10 compute-0 systemd[1]: libpod-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope: Deactivated successfully.
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.638525868 +0000 UTC m=+0.155746926 container died 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d367b3c00eb2434a68c3bc04adf875dfd3b45ed2a92d865f708854922e73357-merged.mount: Deactivated successfully.
Nov 22 05:41:10 compute-0 podman[220428]: 2025-11-22 05:41:10.676731984 +0000 UTC m=+0.193952982 container remove 8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:41:10 compute-0 ceph-mon[75840]: pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:10 compute-0 systemd[1]: libpod-conmon-8b155276ddd3c0cc2ac694e5a451f225af85c29583cd9bbb494ad720c17b39e8.scope: Deactivated successfully.
Nov 22 05:41:10 compute-0 podman[220544]: 2025-11-22 05:41:10.851851945 +0000 UTC m=+0.048853272 container create 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:41:10 compute-0 systemd[1]: Started libpod-conmon-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope.
Nov 22 05:41:10 compute-0 podman[220544]: 2025-11-22 05:41:10.829461509 +0000 UTC m=+0.026462906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:41:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:41:10 compute-0 podman[220544]: 2025-11-22 05:41:10.957794014 +0000 UTC m=+0.154795441 container init 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:41:10 compute-0 podman[220544]: 2025-11-22 05:41:10.969352232 +0000 UTC m=+0.166353559 container start 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:41:10 compute-0 podman[220544]: 2025-11-22 05:41:10.973900193 +0000 UTC m=+0.170901540 container attach 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:41:11 compute-0 sudo[220639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuiwvwxwrfnrklbdorxdoelazocbrjra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790070.7121084-1236-75326783798223/AnsiballZ_stat.py'
Nov 22 05:41:11 compute-0 sudo[220639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:11 compute-0 podman[220641]: 2025-11-22 05:41:11.188679959 +0000 UTC m=+0.089940174 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 05:41:11 compute-0 python3.9[220642]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:11 compute-0 sudo[220639]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:11 compute-0 sudo[220737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkefaehbhhgoqnlxzdwpwekmvjfqhfpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790070.7121084-1236-75326783798223/AnsiballZ_file.py'
Nov 22 05:41:11 compute-0 sudo[220737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:11 compute-0 python3.9[220740]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:11 compute-0 sudo[220737]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:12 compute-0 magical_yalow[220587]: {
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_id": 1,
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "type": "bluestore"
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     },
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_id": 2,
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "type": "bluestore"
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     },
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_id": 0,
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:41:12 compute-0 magical_yalow[220587]:         "type": "bluestore"
Nov 22 05:41:12 compute-0 magical_yalow[220587]:     }
Nov 22 05:41:12 compute-0 magical_yalow[220587]: }
Nov 22 05:41:12 compute-0 systemd[1]: libpod-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Deactivated successfully.
Nov 22 05:41:12 compute-0 systemd[1]: libpod-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Consumed 1.139s CPU time.
Nov 22 05:41:12 compute-0 podman[220544]: 2025-11-22 05:41:12.104151523 +0000 UTC m=+1.301152850 container died 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:41:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b2471173cdb435d8d466ddf975e899bd81849b5db2a3a3c6ed71aed43928d93-merged.mount: Deactivated successfully.
Nov 22 05:41:12 compute-0 podman[220544]: 2025-11-22 05:41:12.187772948 +0000 UTC m=+1.384774285 container remove 100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:41:12 compute-0 systemd[1]: libpod-conmon-100b64b010f45d737d96a061ab96edd64b7dc8b48e76a80365551f36beeafabd.scope: Deactivated successfully.
Nov 22 05:41:12 compute-0 sudo[220291]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:41:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:41:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:12 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0720206e-d79f-4e65-9ef4-4b85840d88b1 does not exist
Nov 22 05:41:12 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 995ddc2e-729f-4868-8572-cbb60fa31849 does not exist
Nov 22 05:41:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:12 compute-0 sudo[220878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:41:12 compute-0 sudo[220878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:12 compute-0 sudo[220878]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:12 compute-0 sudo[220914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:41:12 compute-0 sudo[220914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:41:12 compute-0 sudo[220914]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:12 compute-0 sudo[220981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbqvezzuobztfpdtioayjttlxutulkcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790072.1026971-1249-164109495322087/AnsiballZ_command.py'
Nov 22 05:41:12 compute-0 sudo[220981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:12 compute-0 python3.9[220983]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:12 compute-0 sudo[220981]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:13 compute-0 ceph-mon[75840]: pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:41:13 compute-0 sudo[221134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmlgiizkrkfwffimdwypawywuttouqnq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763790072.9287343-1257-132623639541610/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 05:41:13 compute-0 sudo[221134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:13 compute-0 python3[221136]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 05:41:13 compute-0 sudo[221134]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:14 compute-0 sudo[221286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oboyvrvvfghhlypesiwjylwmwuijlzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790073.9710221-1265-224479597126050/AnsiballZ_stat.py'
Nov 22 05:41:14 compute-0 sudo[221286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:14 compute-0 python3.9[221288]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:14 compute-0 sudo[221286]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:14 compute-0 sudo[221364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spsrxtkznxwlvejfxnmbtbixqsnzxcvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790073.9710221-1265-224479597126050/AnsiballZ_file.py'
Nov 22 05:41:14 compute-0 sudo[221364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:15 compute-0 python3.9[221366]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:15 compute-0 sudo[221364]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:15 compute-0 ceph-mon[75840]: pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:15 compute-0 sudo[221516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkekqydkdksvecdeyvzettcizxsaqifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790075.32977-1277-23476506072684/AnsiballZ_stat.py'
Nov 22 05:41:15 compute-0 sudo[221516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:15 compute-0 python3.9[221518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:16 compute-0 sudo[221516]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:16 compute-0 sudo[221594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpoualnclgjjkjhcltutalehdevjibdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790075.32977-1277-23476506072684/AnsiballZ_file.py'
Nov 22 05:41:16 compute-0 sudo[221594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:16 compute-0 python3.9[221596]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:16 compute-0 sudo[221594]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:17 compute-0 sudo[221746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbzwdeiqvzemhxovbgkpnaqgsvzxhypw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790076.7671142-1289-48666483925032/AnsiballZ_stat.py'
Nov 22 05:41:17 compute-0 sudo[221746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:17 compute-0 ceph-mon[75840]: pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:17 compute-0 python3.9[221748]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:17 compute-0 sudo[221746]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:17 compute-0 sudo[221824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evpvrinvtwsfivmarzoowkbdcjarkqkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790076.7671142-1289-48666483925032/AnsiballZ_file.py'
Nov 22 05:41:17 compute-0 sudo[221824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:17 compute-0 python3.9[221826]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:18 compute-0 sudo[221824]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:18 compute-0 sudo[221976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsnhfsxpshoofmechimdwoylobzchawb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790078.2618928-1301-215693687205186/AnsiballZ_stat.py'
Nov 22 05:41:18 compute-0 sudo[221976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:18 compute-0 python3.9[221978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:18 compute-0 sudo[221976]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:19 compute-0 sudo[222054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfdehvzsxqzwmpvvcowgcrtohlcmpjyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790078.2618928-1301-215693687205186/AnsiballZ_file.py'
Nov 22 05:41:19 compute-0 sudo[222054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:19 compute-0 ceph-mon[75840]: pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:19 compute-0 python3.9[222056]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:19 compute-0 sudo[222054]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:20 compute-0 sudo[222206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkdrfnwuoreetxtimttzbsstpsakjrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790079.7000976-1313-188355291129806/AnsiballZ_stat.py'
Nov 22 05:41:20 compute-0 sudo[222206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:20 compute-0 python3.9[222208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:20 compute-0 sudo[222206]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:20 compute-0 sudo[222331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfouqbqdmmvfygeuawpvnshuzmsgradb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790079.7000976-1313-188355291129806/AnsiballZ_copy.py'
Nov 22 05:41:20 compute-0 sudo[222331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:20 compute-0 python3.9[222333]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763790079.7000976-1313-188355291129806/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:20 compute-0 sudo[222331]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:21 compute-0 ceph-mon[75840]: pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:21 compute-0 sudo[222483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqafqqqvxypjazdqelisojbvggxjqfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790081.1313794-1328-54236993775345/AnsiballZ_file.py'
Nov 22 05:41:21 compute-0 sudo[222483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:21 compute-0 python3.9[222485]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:21 compute-0 sudo[222483]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:22 compute-0 sudo[222635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejazulzywvmkbojahkizrylzoqndvkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790082.0383441-1336-174386132463260/AnsiballZ_command.py'
Nov 22 05:41:22 compute-0 sudo[222635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:22 compute-0 python3.9[222637]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:22 compute-0 sudo[222635]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:23 compute-0 ceph-mon[75840]: pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:23 compute-0 sudo[222790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdaxzndlhixooexqohyfzuzrfescebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790082.8261821-1344-132187112343305/AnsiballZ_blockinfile.py'
Nov 22 05:41:23 compute-0 sudo[222790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:23 compute-0 python3.9[222792]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:23 compute-0 sudo[222790]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:24 compute-0 sudo[222942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgymcliwksuaxerhjqbjqtikpescrsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790083.7806466-1353-278784307585778/AnsiballZ_command.py'
Nov 22 05:41:24 compute-0 sudo[222942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:24 compute-0 python3.9[222944]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:24 compute-0 sudo[222942]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:25 compute-0 sudo[223095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmvlpskikttfnhszvvmzpoxvpqwnvyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790084.7567718-1361-261718725351471/AnsiballZ_stat.py'
Nov 22 05:41:25 compute-0 sudo[223095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:25 compute-0 ceph-mon[75840]: pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:25 compute-0 python3.9[223097]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:41:25 compute-0 sudo[223095]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:25 compute-0 sudo[223249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiwdpjcsuwncszbvswnsnygyqvjqdnyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790085.607108-1369-262611233367288/AnsiballZ_command.py'
Nov 22 05:41:25 compute-0 sudo[223249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:26 compute-0 python3.9[223251]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:26 compute-0 sudo[223249]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:26 compute-0 sudo[223404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tifngcpxyaxeinoglctzxecdzmwbqjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790086.3691418-1377-47280585659842/AnsiballZ_file.py'
Nov 22 05:41:26 compute-0 sudo[223404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:26 compute-0 python3.9[223406]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:26 compute-0 sudo[223404]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:27 compute-0 ceph-mon[75840]: pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:27 compute-0 sudo[223556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjezqmaofpoidyhvpekbaivpgaesbpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790087.1766067-1385-213896729287548/AnsiballZ_stat.py'
Nov 22 05:41:27 compute-0 sudo[223556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:27 compute-0 python3.9[223558]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:27 compute-0 sudo[223556]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:28 compute-0 sudo[223679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhzynwsrutbbekkpozfjfjbsjjpphgnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790087.1766067-1385-213896729287548/AnsiballZ_copy.py'
Nov 22 05:41:28 compute-0 sudo[223679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:28 compute-0 python3.9[223681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790087.1766067-1385-213896729287548/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:28 compute-0 sudo[223679]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:29 compute-0 sudo[223831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daxqckpkhdhdnwqonuizixpkgslmnolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790088.694479-1400-154927054445206/AnsiballZ_stat.py'
Nov 22 05:41:29 compute-0 sudo[223831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:29 compute-0 ceph-mon[75840]: pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:29 compute-0 python3.9[223833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:29 compute-0 sudo[223831]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:29 compute-0 sudo[223954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-priungurjcfvndoavbbvtzhjuvddvnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790088.694479-1400-154927054445206/AnsiballZ_copy.py'
Nov 22 05:41:29 compute-0 sudo[223954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:30 compute-0 python3.9[223956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790088.694479-1400-154927054445206/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:30 compute-0 sudo[223954]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:30 compute-0 sudo[224106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bozxknvnvgjnpwhtmgmwufuewgzbzwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790090.3166318-1415-68787885280429/AnsiballZ_stat.py'
Nov 22 05:41:30 compute-0 sudo[224106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:30 compute-0 python3.9[224108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:41:30 compute-0 sudo[224106]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:31 compute-0 ceph-mon[75840]: pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:31 compute-0 sudo[224229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgpfflipxxogxlremgvqrzksvbtxdvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790090.3166318-1415-68787885280429/AnsiballZ_copy.py'
Nov 22 05:41:31 compute-0 sudo[224229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:31 compute-0 python3.9[224231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790090.3166318-1415-68787885280429/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:41:31 compute-0 sudo[224229]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:32 compute-0 sudo[224381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfgfrculjuhvlcqzpkwozftalrdgprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790091.784457-1430-15216445124738/AnsiballZ_systemd.py'
Nov 22 05:41:32 compute-0 sudo[224381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:32 compute-0 python3.9[224383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:41:32 compute-0 systemd[1]: Reloading.
Nov 22 05:41:32 compute-0 systemd-sysv-generator[224413]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:41:32 compute-0 systemd-rc-local-generator[224410]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:41:32 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 22 05:41:32 compute-0 sudo[224381]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:32 compute-0 podman[224421]: 2025-11-22 05:41:32.934568633 +0000 UTC m=+0.147404094 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 05:41:33 compute-0 ceph-mon[75840]: pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:33 compute-0 sudo[224599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opsmfkpwnzjifpombxcaismvraclxznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790093.1086173-1438-175123299716899/AnsiballZ_systemd.py'
Nov 22 05:41:33 compute-0 sudo[224599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:33 compute-0 python3.9[224601]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 05:41:33 compute-0 systemd[1]: Reloading.
Nov 22 05:41:33 compute-0 systemd-sysv-generator[224627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:41:33 compute-0 systemd-rc-local-generator[224619]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:41:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:34 compute-0 systemd[1]: Reloading.
Nov 22 05:41:34 compute-0 systemd-rc-local-generator[224666]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:41:34 compute-0 systemd-sysv-generator[224669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:41:34 compute-0 sudo[224599]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:34 compute-0 sshd-session[164857]: Connection closed by 192.168.122.30 port 46772
Nov 22 05:41:34 compute-0 sshd-session[164854]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:41:34 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 22 05:41:34 compute-0 systemd[1]: session-48.scope: Consumed 3min 58.861s CPU time.
Nov 22 05:41:34 compute-0 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Nov 22 05:41:35 compute-0 systemd-logind[798]: Removed session 48.
Nov 22 05:41:35 compute-0 ceph-mon[75840]: pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.904 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:41:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:41:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:41:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:41:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:37 compute-0 ceph-mon[75840]: pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:39 compute-0 ceph-mon[75840]: pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:40 compute-0 sshd-session[224698]: Accepted publickey for zuul from 192.168.122.30 port 55190 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:41:40 compute-0 systemd-logind[798]: New session 49 of user zuul.
Nov 22 05:41:40 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 22 05:41:40 compute-0 sshd-session[224698]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:41:41 compute-0 python3.9[224851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:41:41 compute-0 ceph-mon[75840]: pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:42 compute-0 podman[224899]: 2025-11-22 05:41:42.231039227 +0000 UTC m=+0.080233166 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 05:41:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:42 compute-0 python3.9[225024]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:41:43 compute-0 network[225041]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:41:43 compute-0 network[225042]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:41:43 compute-0 network[225043]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:41:43 compute-0 ceph-mon[75840]: pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:41:43
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:41:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:41:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:45 compute-0 ceph-mon[75840]: pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:47 compute-0 ceph-mon[75840]: pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:49 compute-0 ceph-mon[75840]: pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:49 compute-0 sudo[225313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njptweryrhumrpkznfihgtmpetcukhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790109.4068403-47-22300877377325/AnsiballZ_setup.py'
Nov 22 05:41:49 compute-0 sudo[225313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:50 compute-0 python3.9[225315]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 05:41:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:50 compute-0 sudo[225313]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:51 compute-0 sudo[225398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpinvptfrvxtesplyzlviuzgfvlpromi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790109.4068403-47-22300877377325/AnsiballZ_dnf.py'
Nov 22 05:41:51 compute-0 sudo[225398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:51 compute-0 ceph-mon[75840]: pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:51 compute-0 python3.9[225400]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:41:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:41:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:41:53 compute-0 ceph-mon[75840]: pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:55 compute-0 ceph-mon[75840]: pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:56 compute-0 sudo[225398]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:57 compute-0 sudo[225551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgkdfalkrfutotdjompjomzltqghumht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790116.6864972-59-176725319641110/AnsiballZ_stat.py'
Nov 22 05:41:57 compute-0 sudo[225551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.309937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117309996, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1905, "num_deletes": 250, "total_data_size": 3251414, "memory_usage": 3291944, "flush_reason": "Manual Compaction"}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117325902, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1821187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11763, "largest_seqno": 13667, "table_properties": {"data_size": 1815071, "index_size": 3127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15249, "raw_average_key_size": 20, "raw_value_size": 1801540, "raw_average_value_size": 2376, "num_data_blocks": 145, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789896, "oldest_key_time": 1763789896, "file_creation_time": 1763790117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16027 microseconds, and 8498 cpu microseconds.
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.325962) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1821187 bytes OK
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.325986) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328137) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328157) EVENT_LOG_v1 {"time_micros": 1763790117328150, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.328179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3243421, prev total WAL file size 3243421, number of live WAL files 2.
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.329707) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1778KB)], [29(7606KB)]
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117329784, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9610390, "oldest_snapshot_seqno": -1}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4053 keys, 7690750 bytes, temperature: kUnknown
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117392188, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7690750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7661728, "index_size": 17776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96309, "raw_average_key_size": 23, "raw_value_size": 7586801, "raw_average_value_size": 1871, "num_data_blocks": 773, "num_entries": 4053, "num_filter_entries": 4053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790117, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.392513) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7690750 bytes
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.394346) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.4 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.2) OK, records in: 4460, records dropped: 407 output_compression: NoCompression
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.394376) EVENT_LOG_v1 {"time_micros": 1763790117394361, "job": 12, "event": "compaction_finished", "compaction_time_micros": 62488, "compaction_time_cpu_micros": 32347, "output_level": 6, "num_output_files": 1, "total_output_size": 7690750, "num_input_records": 4460, "num_output_records": 4053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117395071, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790117397435, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.329589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:41:57.397532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:41:57 compute-0 python3.9[225553]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:41:57 compute-0 sudo[225551]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:57 compute-0 ceph-mon[75840]: pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:58 compute-0 sudo[225703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpstohbsmxtufojpfojrchmofhxrysvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790117.7904606-69-82468802401702/AnsiballZ_command.py'
Nov 22 05:41:58 compute-0 sudo[225703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:58 compute-0 python3.9[225705]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:41:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:41:58 compute-0 sudo[225703]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:59 compute-0 sudo[225856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdwehifuafgkmsmyahjhqawjaazdwxjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790118.8073318-79-169034752292261/AnsiballZ_stat.py'
Nov 22 05:41:59 compute-0 sudo[225856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:41:59 compute-0 python3.9[225858]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:41:59 compute-0 sudo[225856]: pam_unix(sudo:session): session closed for user root
Nov 22 05:41:59 compute-0 ceph-mon[75840]: pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:00 compute-0 sudo[226008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlzhxajwteammssruugmvuyhkurkilmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790119.6602206-87-248430058062474/AnsiballZ_command.py'
Nov 22 05:42:00 compute-0 sudo[226008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:00 compute-0 python3.9[226010]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:42:00 compute-0 sudo[226008]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:00 compute-0 sudo[226161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qblemwatchgyvmsftfouruawhcjmvglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790120.5513635-95-120336329415970/AnsiballZ_stat.py'
Nov 22 05:42:00 compute-0 sudo[226161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:01 compute-0 python3.9[226163]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:01 compute-0 sudo[226161]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:01 compute-0 ceph-mon[75840]: pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:01 compute-0 sudo[226284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asllbmwfhigkyibjrcjsttoiysbvqgff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790120.5513635-95-120336329415970/AnsiballZ_copy.py'
Nov 22 05:42:01 compute-0 sudo[226284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:02 compute-0 python3.9[226286]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790120.5513635-95-120336329415970/.source.iscsi _original_basename=.ltc7p64w follow=False checksum=cb9ad46cd98a71044757bb18980699a4118db1db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:02 compute-0 sudo[226284]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:02 compute-0 sudo[226436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuufxxxgvegnlewfyyvalcjumnvyfbcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790122.3197732-110-144761437898525/AnsiballZ_file.py'
Nov 22 05:42:02 compute-0 sudo[226436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:03 compute-0 python3.9[226438]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:03 compute-0 sudo[226436]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:03 compute-0 podman[226439]: 2025-11-22 05:42:03.27177979 +0000 UTC m=+0.124331593 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:42:03 compute-0 ceph-mon[75840]: pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:03 compute-0 sudo[226615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvognhjozfrgsihwjnhojilsbkdrxnlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790123.3356953-118-155967407058103/AnsiballZ_lineinfile.py'
Nov 22 05:42:03 compute-0 sudo[226615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:04 compute-0 python3.9[226617]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:04 compute-0 sudo[226615]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:05 compute-0 sudo[226767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziuejicvpabrtjmxyhoxeewvxmrugyno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790124.4430525-127-23634277243030/AnsiballZ_systemd_service.py'
Nov 22 05:42:05 compute-0 sudo[226767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:05 compute-0 python3.9[226769]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:42:05 compute-0 ceph-mon[75840]: pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:06 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 22 05:42:06 compute-0 sudo[226767]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:07 compute-0 sudo[226923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxloqsecokcjbuyzhbtyidgfotpqncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790126.8374536-135-114759985632701/AnsiballZ_systemd_service.py'
Nov 22 05:42:07 compute-0 sudo[226923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:07 compute-0 ceph-mon[75840]: pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:07 compute-0 python3.9[226925]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:42:07 compute-0 systemd[1]: Reloading.
Nov 22 05:42:07 compute-0 systemd-rc-local-generator[226956]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:42:07 compute-0 systemd-sysv-generator[226963]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:42:08 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 05:42:08 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 05:42:08 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 22 05:42:08 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 05:42:08 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 22 05:42:08 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 22 05:42:08 compute-0 sudo[226923]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:09 compute-0 sudo[227125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijhpuradhvidgrlxhnodyphzrvcehfcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790128.719522-146-115155815776750/AnsiballZ_service_facts.py'
Nov 22 05:42:09 compute-0 sudo[227125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:09 compute-0 python3.9[227127]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:42:09 compute-0 network[227144]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:42:09 compute-0 network[227145]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:42:09 compute-0 network[227146]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:42:09 compute-0 ceph-mon[75840]: pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:11 compute-0 ceph-mon[75840]: pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:12 compute-0 podman[227217]: 2025-11-22 05:42:12.377344582 +0000 UTC m=+0.089246214 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 22 05:42:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:12 compute-0 sudo[227240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:12 compute-0 sudo[227240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:12 compute-0 sudo[227240]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:12 compute-0 sudo[227268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:42:12 compute-0 sudo[227268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:12 compute-0 sudo[227268]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:12 compute-0 sudo[227296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:12 compute-0 sudo[227296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:12 compute-0 sudo[227296]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:12 compute-0 sudo[227324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:42:12 compute-0 sudo[227324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:13 compute-0 sudo[227324]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5e53ff88-b331-4730-b8bd-1b0de6091f66 does not exist
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 55a6836b-6cf4-4742-8fa6-078f29e394bd does not exist
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 79bd95fc-17f9-4571-a2d8-ebfa0a6b6cb0 does not exist
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:42:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:42:13 compute-0 sudo[227406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:13 compute-0 sudo[227406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:13 compute-0 sudo[227406]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:13 compute-0 sudo[227434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:42:13 compute-0 sudo[227434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:13 compute-0 sudo[227434]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:13 compute-0 sudo[227462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:13 compute-0 sudo[227462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:13 compute-0 sudo[227462]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:13 compute-0 sudo[227125]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:13 compute-0 sudo[227490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:42:13 compute-0 sudo[227490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:13 compute-0 ceph-mon[75840]: pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.058397811 +0000 UTC m=+0.072362056 container create db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:42:14 compute-0 systemd[1]: Started libpod-conmon-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope.
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.026750834 +0000 UTC m=+0.040715179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.152574786 +0000 UTC m=+0.166539061 container init db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.164631654 +0000 UTC m=+0.178595949 container start db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.168751033 +0000 UTC m=+0.182715328 container attach db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:14 compute-0 youthful_torvalds[227667]: 167 167
Nov 22 05:42:14 compute-0 systemd[1]: libpod-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope: Deactivated successfully.
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.173708165 +0000 UTC m=+0.187672490 container died db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:42:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8edc9258270854038842eec02ecad466bcb8d59e58cd9d70ebde449de837d0d-merged.mount: Deactivated successfully.
Nov 22 05:42:14 compute-0 podman[227608]: 2025-11-22 05:42:14.226139153 +0000 UTC m=+0.240103448 container remove db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:42:14 compute-0 systemd[1]: libpod-conmon-db649343a30e157e4c14f71e42ffd86087cd2427df8136ff29a07b2784d3df72.scope: Deactivated successfully.
Nov 22 05:42:14 compute-0 sudo[227741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-himxcucljzkqlvxljsingcxtycdiuwmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790133.9393272-156-206402910427240/AnsiballZ_file.py'
Nov 22 05:42:14 compute-0 sudo[227741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:14 compute-0 podman[227749]: 2025-11-22 05:42:14.474018606 +0000 UTC m=+0.075322295 container create 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 05:42:14 compute-0 systemd[1]: Started libpod-conmon-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope.
Nov 22 05:42:14 compute-0 podman[227749]: 2025-11-22 05:42:14.444868134 +0000 UTC m=+0.046171863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:14 compute-0 python3.9[227743]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 05:42:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:14 compute-0 sudo[227741]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:14 compute-0 podman[227749]: 2025-11-22 05:42:14.599797306 +0000 UTC m=+0.201101065 container init 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:42:14 compute-0 podman[227749]: 2025-11-22 05:42:14.607749567 +0000 UTC m=+0.209053266 container start 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 05:42:14 compute-0 podman[227749]: 2025-11-22 05:42:14.616207711 +0000 UTC m=+0.217511400 container attach 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:42:15 compute-0 sudo[227928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzgtkozvkgecvhciqgccexahillrcog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790134.8307445-164-83369531910437/AnsiballZ_modprobe.py'
Nov 22 05:42:15 compute-0 sudo[227928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:15 compute-0 ceph-mon[75840]: pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:15 compute-0 python3.9[227932]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 22 05:42:15 compute-0 sudo[227928]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:15 compute-0 bold_aryabhata[227766]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:42:15 compute-0 bold_aryabhata[227766]: --> relative data size: 1.0
Nov 22 05:42:15 compute-0 bold_aryabhata[227766]: --> All data devices are unavailable
Nov 22 05:42:15 compute-0 systemd[1]: libpod-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Deactivated successfully.
Nov 22 05:42:15 compute-0 systemd[1]: libpod-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Consumed 1.095s CPU time.
Nov 22 05:42:15 compute-0 podman[227749]: 2025-11-22 05:42:15.763866858 +0000 UTC m=+1.365170557 container died 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:42:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3720302a741f694d5654a5e984f766ac37d86c4c99b38b0cfef01f7115a32696-merged.mount: Deactivated successfully.
Nov 22 05:42:15 compute-0 podman[227749]: 2025-11-22 05:42:15.854177539 +0000 UTC m=+1.455481218 container remove 92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:42:15 compute-0 systemd[1]: libpod-conmon-92de5d121a996a1fbb879fb7168cbe5dfbc4e93a2f06cb1f9222f608468e7de6.scope: Deactivated successfully.
Nov 22 05:42:15 compute-0 sudo[227490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:15 compute-0 sudo[228012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:15 compute-0 sudo[228012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:15 compute-0 sudo[228012]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:16 compute-0 sudo[228066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:42:16 compute-0 sudo[228066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:16 compute-0 sudo[228066]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:16 compute-0 sudo[228114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:16 compute-0 sudo[228114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:16 compute-0 sudo[228114]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:16 compute-0 sudo[228163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:42:16 compute-0 sudo[228163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:16 compute-0 sudo[228212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcoqzyfunkurfspbubpnkyxnzhjkmbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790135.904878-172-96704401735867/AnsiballZ_stat.py'
Nov 22 05:42:16 compute-0 sudo[228212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:16 compute-0 python3.9[228216]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:16 compute-0 sudo[228212]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.717325663 +0000 UTC m=+0.070888198 container create 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:16 compute-0 systemd[1]: Started libpod-conmon-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope.
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.687545954 +0000 UTC m=+0.041108539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.828069105 +0000 UTC m=+0.181631710 container init 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.840764381 +0000 UTC m=+0.194326916 container start 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.844910401 +0000 UTC m=+0.198472946 container attach 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:42:16 compute-0 happy_tu[228344]: 167 167
Nov 22 05:42:16 compute-0 systemd[1]: libpod-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope: Deactivated successfully.
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.849516283 +0000 UTC m=+0.203078858 container died 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 05:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ed2dfd930ede31d4ff5cd2497881540fe8167930b76fd8acaf39ad078da8b6d-merged.mount: Deactivated successfully.
Nov 22 05:42:16 compute-0 podman[228298]: 2025-11-22 05:42:16.914823682 +0000 UTC m=+0.268386217 container remove 9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:42:16 compute-0 systemd[1]: libpod-conmon-9af423bb594ea3c771beda31695f8db1be8905cef1a8ab4f19165e689c423f7a.scope: Deactivated successfully.
Nov 22 05:42:16 compute-0 sudo[228412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqcuuhdrugjkamvhbjvqhlyscxmqoceo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790135.904878-172-96704401735867/AnsiballZ_copy.py'
Nov 22 05:42:16 compute-0 sudo[228412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:17 compute-0 podman[228420]: 2025-11-22 05:42:17.152811913 +0000 UTC m=+0.073686491 container create 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:42:17 compute-0 python3.9[228414]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790135.904878-172-96704401735867/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:17 compute-0 podman[228420]: 2025-11-22 05:42:17.122402089 +0000 UTC m=+0.043276727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:17 compute-0 systemd[1]: Started libpod-conmon-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope.
Nov 22 05:42:17 compute-0 sudo[228412]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:17 compute-0 podman[228420]: 2025-11-22 05:42:17.326965435 +0000 UTC m=+0.247840023 container init 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:17 compute-0 podman[228420]: 2025-11-22 05:42:17.339263321 +0000 UTC m=+0.260137879 container start 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:42:17 compute-0 podman[228420]: 2025-11-22 05:42:17.342941958 +0000 UTC m=+0.263816516 container attach 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:42:17 compute-0 ceph-mon[75840]: pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:17 compute-0 sudo[228591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzljaswgrhvhkqhcxzevxpvckpkuvhdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790137.5599067-188-217169857170753/AnsiballZ_lineinfile.py'
Nov 22 05:42:17 compute-0 sudo[228591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]: {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     "0": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "devices": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "/dev/loop3"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             ],
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_name": "ceph_lv0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_size": "21470642176",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "name": "ceph_lv0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "tags": {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_name": "ceph",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.crush_device_class": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.encrypted": "0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_id": "0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.vdo": "0"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             },
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "vg_name": "ceph_vg0"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         }
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     ],
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     "1": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "devices": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "/dev/loop4"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             ],
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_name": "ceph_lv1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_size": "21470642176",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "name": "ceph_lv1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "tags": {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_name": "ceph",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.crush_device_class": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.encrypted": "0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_id": "1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.vdo": "0"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             },
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "vg_name": "ceph_vg1"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         }
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     ],
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     "2": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "devices": [
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "/dev/loop5"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             ],
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_name": "ceph_lv2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_size": "21470642176",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "name": "ceph_lv2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "tags": {
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.cluster_name": "ceph",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.crush_device_class": "",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.encrypted": "0",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osd_id": "2",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:                 "ceph.vdo": "0"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             },
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "type": "block",
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:             "vg_name": "ceph_vg2"
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:         }
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]:     ]
Nov 22 05:42:18 compute-0 wizardly_shtern[228437]: }
Nov 22 05:42:18 compute-0 python3.9[228593]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:18 compute-0 podman[228420]: 2025-11-22 05:42:18.173795387 +0000 UTC m=+1.094669935 container died 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:42:18 compute-0 systemd[1]: libpod-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope: Deactivated successfully.
Nov 22 05:42:18 compute-0 sudo[228591]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-055661ba6cc95eeb5ca0afb22bd5c960c4dce7e2935233ca3a6ced8732421861-merged.mount: Deactivated successfully.
Nov 22 05:42:18 compute-0 podman[228420]: 2025-11-22 05:42:18.226887462 +0000 UTC m=+1.147762010 container remove 2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:42:18 compute-0 systemd[1]: libpod-conmon-2f803d47f4ac8ff0d587ab94f5fbe2e3f5554abbea024a31f94c0b0c05723f26.scope: Deactivated successfully.
Nov 22 05:42:18 compute-0 sudo[228163]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:18 compute-0 sudo[228633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:18 compute-0 sudo[228633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:18 compute-0 sudo[228633]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:18 compute-0 sudo[228682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:42:18 compute-0 sudo[228682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:18 compute-0 sudo[228682]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:18 compute-0 sudo[228735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:18 compute-0 sudo[228735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:18 compute-0 sudo[228735]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:18 compute-0 sudo[228760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:42:18 compute-0 sudo[228760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:18 compute-0 podman[228855]: 2025-11-22 05:42:18.965118319 +0000 UTC m=+0.067763856 container create a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:42:19 compute-0 systemd[1]: Started libpod-conmon-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope.
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:18.936064809 +0000 UTC m=+0.038710436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:19 compute-0 sudo[228917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnzlelblbnlqkqgleasygvghzjhseblm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790138.358651-196-170029600592302/AnsiballZ_systemd.py'
Nov 22 05:42:19 compute-0 sudo[228917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:19.065672231 +0000 UTC m=+0.168317838 container init a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:19.077747871 +0000 UTC m=+0.180393438 container start a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:19.081947392 +0000 UTC m=+0.184592989 container attach a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:42:19 compute-0 kind_pare[228915]: 167 167
Nov 22 05:42:19 compute-0 systemd[1]: libpod-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope: Deactivated successfully.
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:19.085737842 +0000 UTC m=+0.188383409 container died a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d05929e956d110c9a5ac31ba218a20a12e6e632dbcc6b097f9b30b26d06ad56-merged.mount: Deactivated successfully.
Nov 22 05:42:19 compute-0 podman[228855]: 2025-11-22 05:42:19.142513865 +0000 UTC m=+0.245159422 container remove a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_pare, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:42:19 compute-0 systemd[1]: libpod-conmon-a1f06280c1bdf2ff187d59cb148f8b48c1f063e1c6bf5706f6821e8c811f82fe.scope: Deactivated successfully.
Nov 22 05:42:19 compute-0 podman[228942]: 2025-11-22 05:42:19.390309516 +0000 UTC m=+0.074462392 container create 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:42:19 compute-0 python3.9[228920]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:42:19 compute-0 systemd[1]: Started libpod-conmon-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope.
Nov 22 05:42:19 compute-0 podman[228942]: 2025-11-22 05:42:19.356871942 +0000 UTC m=+0.041024908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:42:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:19 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 05:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:42:19 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 05:42:19 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 05:42:19 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 05:42:19 compute-0 podman[228942]: 2025-11-22 05:42:19.496632941 +0000 UTC m=+0.180785887 container init 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:42:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 05:42:19 compute-0 podman[228942]: 2025-11-22 05:42:19.509609826 +0000 UTC m=+0.193762732 container start 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:42:19 compute-0 podman[228942]: 2025-11-22 05:42:19.513732395 +0000 UTC m=+0.197885301 container attach 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:42:19 compute-0 sudo[228917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:19 compute-0 ceph-mon[75840]: pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:20 compute-0 sudo[229120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-venadkmeyuaicaptfketjatebaafgvrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790139.832489-204-230150208647145/AnsiballZ_file.py'
Nov 22 05:42:20 compute-0 sudo[229120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:20 compute-0 python3.9[229122]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:20 compute-0 sudo[229120]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:20 compute-0 gifted_darwin[228961]: {
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_id": 1,
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "type": "bluestore"
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     },
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_id": 2,
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "type": "bluestore"
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     },
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_id": 0,
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:         "type": "bluestore"
Nov 22 05:42:20 compute-0 gifted_darwin[228961]:     }
Nov 22 05:42:20 compute-0 gifted_darwin[228961]: }
Nov 22 05:42:20 compute-0 systemd[1]: libpod-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Deactivated successfully.
Nov 22 05:42:20 compute-0 systemd[1]: libpod-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Consumed 1.111s CPU time.
Nov 22 05:42:20 compute-0 podman[228942]: 2025-11-22 05:42:20.622119971 +0000 UTC m=+1.306272847 container died 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-50a1339417541b57143fa5c9108d220ced3ddd9ac1be77c6cf03919300441c66-merged.mount: Deactivated successfully.
Nov 22 05:42:20 compute-0 podman[228942]: 2025-11-22 05:42:20.713665846 +0000 UTC m=+1.397818732 container remove 87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 22 05:42:20 compute-0 systemd[1]: libpod-conmon-87c4e4006bef9c0e200d1c1e87611f3d31a595b5f2f917f683861e167fca56e9.scope: Deactivated successfully.
Nov 22 05:42:20 compute-0 sudo[228760]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:42:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:42:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:20 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 39b2ba35-ef84-45fd-ab26-8e8fd0f46121 does not exist
Nov 22 05:42:20 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 312ca808-3d91-4437-b0a6-b1cfa570b27f does not exist
Nov 22 05:42:20 compute-0 sudo[229240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:42:20 compute-0 sudo[229240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:20 compute-0 sudo[229240]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:20 compute-0 sudo[229287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:42:20 compute-0 sudo[229287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:42:20 compute-0 sudo[229287]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:21 compute-0 sudo[229362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jflrastvfrnfahlrfdikcyjovtvnppai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790140.668088-213-72581721024722/AnsiballZ_stat.py'
Nov 22 05:42:21 compute-0 sudo[229362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:21 compute-0 python3.9[229364]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:42:21 compute-0 sudo[229362]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:21 compute-0 ceph-mon[75840]: pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:42:21 compute-0 sudo[229514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cigcfjrkycrplstgsiseqliffxylwlkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790141.6050494-222-114999196143591/AnsiballZ_stat.py'
Nov 22 05:42:21 compute-0 sudo[229514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:22 compute-0 python3.9[229516]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:42:22 compute-0 sudo[229514]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:22 compute-0 sudo[229666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pijnukcrlhlnwausivwvlrgtzjwidotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790142.4280372-230-123302246964734/AnsiballZ_stat.py'
Nov 22 05:42:22 compute-0 sudo[229666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:23 compute-0 python3.9[229668]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:23 compute-0 sudo[229666]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:23 compute-0 sudo[229789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfckryorpzrspngiwqxdddgafjtyayjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790142.4280372-230-123302246964734/AnsiballZ_copy.py'
Nov 22 05:42:23 compute-0 sudo[229789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:23 compute-0 ceph-mon[75840]: pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:23 compute-0 python3.9[229791]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790142.4280372-230-123302246964734/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:23 compute-0 sudo[229789]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:24 compute-0 sudo[229941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvixzuuaiteecdnihuevpitpefimrkxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790143.9045777-245-87203300882248/AnsiballZ_command.py'
Nov 22 05:42:24 compute-0 sudo[229941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:24 compute-0 python3.9[229943]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:42:24 compute-0 sudo[229941]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:25 compute-0 sudo[230094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oawweplnowfdcmgaurcdeeyvivehuwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790144.8039258-253-93172273272744/AnsiballZ_lineinfile.py'
Nov 22 05:42:25 compute-0 sudo[230094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:25 compute-0 python3.9[230096]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:25 compute-0 sudo[230094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:25 compute-0 ceph-mon[75840]: pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:26 compute-0 sudo[230246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cornurwkwenkdejyamycagtadnwcvnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790145.6708634-261-154587959840908/AnsiballZ_replace.py'
Nov 22 05:42:26 compute-0 sudo[230246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:26 compute-0 python3.9[230248]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:26 compute-0 sudo[230246]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:27 compute-0 sudo[230398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbgdnveylfykjssgjvhpggslybrfywab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790146.7219129-269-29290584662389/AnsiballZ_replace.py'
Nov 22 05:42:27 compute-0 sudo[230398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:27 compute-0 python3.9[230400]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:27 compute-0 sudo[230398]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:27 compute-0 ceph-mon[75840]: pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:28 compute-0 sudo[230550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcrspjtvjvhuqsxgxqwarutmfuwsstyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790147.6444798-278-218802317724635/AnsiballZ_lineinfile.py'
Nov 22 05:42:28 compute-0 sudo[230550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:28 compute-0 python3.9[230552]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:28 compute-0 sudo[230550]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:28 compute-0 ceph-mon[75840]: pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:28 compute-0 sudo[230702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltohqwmkoykvqerkkhhfgajcvdikwotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790148.431853-278-166239112566967/AnsiballZ_lineinfile.py'
Nov 22 05:42:28 compute-0 sudo[230702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:29 compute-0 python3.9[230704]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:29 compute-0 sudo[230702]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:29 compute-0 sudo[230854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwwogjqqtzvwhmfxecwczlxajxntgkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790149.2334743-278-106564173016171/AnsiballZ_lineinfile.py'
Nov 22 05:42:29 compute-0 sudo[230854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:29 compute-0 python3.9[230856]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:29 compute-0 sudo[230854]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:30 compute-0 sudo[231006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dihcuzagphgefbynfyxivfqebipfgqhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790150.101919-278-41390826458005/AnsiballZ_lineinfile.py'
Nov 22 05:42:30 compute-0 sudo[231006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:30 compute-0 python3.9[231008]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:30 compute-0 sudo[231006]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:31 compute-0 sudo[231158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzchnmypmmuwqkufekfobstawzixfhqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790150.990196-307-59619650774645/AnsiballZ_stat.py'
Nov 22 05:42:31 compute-0 sudo[231158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:31 compute-0 ceph-mon[75840]: pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:31 compute-0 python3.9[231160]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:42:31 compute-0 sudo[231158]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:32 compute-0 sudo[231312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nizcpwsyarpkkvxwsqlrselkdenksdcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790151.866245-315-266422616364156/AnsiballZ_file.py'
Nov 22 05:42:32 compute-0 sudo[231312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:32 compute-0 python3.9[231314]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:32 compute-0 sudo[231312]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:33 compute-0 sudo[231464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogxwaakxdukwacfdjabjdiytcshcidi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790152.874929-324-183869418277385/AnsiballZ_file.py'
Nov 22 05:42:33 compute-0 sudo[231464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:33 compute-0 python3.9[231466]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:33 compute-0 sudo[231464]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:33 compute-0 ceph-mon[75840]: pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:34 compute-0 sudo[231631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhfyotfmpevlxpgskqferniakxeovasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790153.7404451-332-239900713654658/AnsiballZ_stat.py'
Nov 22 05:42:34 compute-0 sudo[231631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:34 compute-0 podman[231590]: 2025-11-22 05:42:34.191045611 +0000 UTC m=+0.127792464 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:42:34 compute-0 python3.9[231638]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:34 compute-0 sudo[231631]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:34 compute-0 sudo[231721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izciotcudhiaiuxitkxgizidhcvlojno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790153.7404451-332-239900713654658/AnsiballZ_file.py'
Nov 22 05:42:34 compute-0 sudo[231721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:34 compute-0 python3.9[231723]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:34 compute-0 sudo[231721]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:35 compute-0 sudo[231873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ursltqdscwusixtxurcdtsivzhqwjbua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790155.0835676-332-277559166068472/AnsiballZ_stat.py'
Nov 22 05:42:35 compute-0 sudo[231873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:35 compute-0 ceph-mon[75840]: pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:35 compute-0 python3.9[231875]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:35 compute-0 sudo[231873]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:36 compute-0 sudo[231951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiavctxvcvqkbyemrnffmzutyfczsmyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790155.0835676-332-277559166068472/AnsiballZ_file.py'
Nov 22 05:42:36 compute-0 sudo[231951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:36 compute-0 python3.9[231953]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:36 compute-0 sudo[231951]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:36 compute-0 sudo[232103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heqnoagdwzzegzsawcmhksalzzbaqgtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790156.4646823-355-38823872335063/AnsiballZ_file.py'
Nov 22 05:42:36 compute-0 sudo[232103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:42:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:42:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:42:36.905 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:42:37 compute-0 python3.9[232105]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:37 compute-0 sudo[232103]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:37 compute-0 ceph-mon[75840]: pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:37 compute-0 sudo[232255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkkpmveyhblioxymyvafhyvzgpwhauds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790157.3980565-363-138090690012668/AnsiballZ_stat.py'
Nov 22 05:42:37 compute-0 sudo[232255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:38 compute-0 python3.9[232257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:38 compute-0 sudo[232255]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:38 compute-0 sudo[232333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdwbmcyzhjunghxkfnnjpeumdxohitd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790157.3980565-363-138090690012668/AnsiballZ_file.py'
Nov 22 05:42:38 compute-0 sudo[232333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:38 compute-0 python3.9[232335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:38 compute-0 sudo[232333]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:39 compute-0 sudo[232485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiqhrbzyrwktfnpulvfizqhacwwqgjpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790158.8427112-375-64273096983038/AnsiballZ_stat.py'
Nov 22 05:42:39 compute-0 sudo[232485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:39 compute-0 python3.9[232487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:39 compute-0 sudo[232485]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:39 compute-0 ceph-mon[75840]: pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:39 compute-0 sudo[232563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiayhqqklfcmbzaarqthdmtwbqxwjzaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790158.8427112-375-64273096983038/AnsiballZ_file.py'
Nov 22 05:42:39 compute-0 sudo[232563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:40 compute-0 python3.9[232565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:40 compute-0 sudo[232563]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:40 compute-0 sudo[232715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvgcvxramyqedkrhnbodvuvvhypfpwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790160.2851932-387-230748253485312/AnsiballZ_systemd.py'
Nov 22 05:42:40 compute-0 sudo[232715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:41 compute-0 python3.9[232717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:42:41 compute-0 systemd[1]: Reloading.
Nov 22 05:42:41 compute-0 systemd-rc-local-generator[232745]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:42:41 compute-0 systemd-sysv-generator[232750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:42:41 compute-0 sudo[232715]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:41 compute-0 ceph-mon[75840]: pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:42 compute-0 sudo[232907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrmcsxayfslawxeftobqrkevmqqtcwxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790161.7020078-395-17143335637185/AnsiballZ_stat.py'
Nov 22 05:42:42 compute-0 sudo[232907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:42 compute-0 python3.9[232909]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:42 compute-0 sudo[232907]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:42 compute-0 sudo[232995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmtsjgaixlrkqggqkyayuouwatwiglth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790161.7020078-395-17143335637185/AnsiballZ_file.py'
Nov 22 05:42:42 compute-0 sudo[232995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:42 compute-0 podman[232959]: 2025-11-22 05:42:42.72278966 +0000 UTC m=+0.091191885 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 05:42:42 compute-0 sshd-session[232879]: Invalid user solana from 80.94.92.182 port 32810
Nov 22 05:42:42 compute-0 python3.9[233006]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:42 compute-0 sudo[232995]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:43 compute-0 ceph-mon[75840]: pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:43 compute-0 sudo[233157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktfcjucelhptflbwxcurmyqtlwjvlhyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790163.185906-407-241951980099426/AnsiballZ_stat.py'
Nov 22 05:42:43 compute-0 sudo[233157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:42:43
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta']
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:42:43 compute-0 sshd-session[232879]: Connection closed by invalid user solana 80.94.92.182 port 32810 [preauth]
Nov 22 05:42:43 compute-0 python3.9[233159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:42:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:42:43 compute-0 sudo[233157]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:44 compute-0 sudo[233235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvcugqdkzwfiuefvwowzwdmjgcthzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790163.185906-407-241951980099426/AnsiballZ_file.py'
Nov 22 05:42:44 compute-0 sudo[233235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:44 compute-0 python3.9[233237]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:44 compute-0 sudo[233235]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:45 compute-0 sudo[233387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbvdhcqolubcongavgwhgocrlozeyzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790164.6101253-419-45536337928637/AnsiballZ_systemd.py'
Nov 22 05:42:45 compute-0 sudo[233387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:45 compute-0 python3.9[233389]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:42:45 compute-0 systemd[1]: Reloading.
Nov 22 05:42:45 compute-0 systemd-rc-local-generator[233418]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:42:45 compute-0 systemd-sysv-generator[233422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:42:45 compute-0 ceph-mon[75840]: pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:45 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 05:42:45 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 05:42:45 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 05:42:45 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 05:42:45 compute-0 sudo[233387]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:46 compute-0 sudo[233581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uodkajyjxchajrhbvpcmdkbtczndplwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790166.2759259-429-219310791223352/AnsiballZ_file.py'
Nov 22 05:42:46 compute-0 sudo[233581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:46 compute-0 python3.9[233583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:46 compute-0 sudo[233581]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:47 compute-0 sudo[233733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftcqvrkrormclwyjlppcnqgxtubvdhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790167.18557-437-72927228338621/AnsiballZ_stat.py'
Nov 22 05:42:47 compute-0 sudo[233733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:47 compute-0 ceph-mon[75840]: pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:47 compute-0 python3.9[233735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:47 compute-0 sudo[233733]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:48 compute-0 sudo[233856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egibgwlgcwggpqvtmgtfndaggxnedfkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790167.18557-437-72927228338621/AnsiballZ_copy.py'
Nov 22 05:42:48 compute-0 sudo[233856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:48 compute-0 python3.9[233858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790167.18557-437-72927228338621/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:48 compute-0 sudo[233856]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:49 compute-0 sudo[234008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkoirexgzyaucrbrffcizwqzelbktzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790168.7700605-454-102895792760282/AnsiballZ_file.py'
Nov 22 05:42:49 compute-0 sudo[234008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:49 compute-0 python3.9[234010]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:42:49 compute-0 sudo[234008]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:49 compute-0 ceph-mon[75840]: pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:49 compute-0 sudo[234160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbcbmtlvzuovhjmbqfawuqvhrljjucek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790169.6121979-462-42850839516937/AnsiballZ_stat.py'
Nov 22 05:42:49 compute-0 sudo[234160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:50 compute-0 python3.9[234162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:42:50 compute-0 sudo[234160]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:50 compute-0 ceph-mon[75840]: pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:50 compute-0 sudo[234283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhwebbauykdvfdwtrhvjxvfmniauhhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790169.6121979-462-42850839516937/AnsiballZ_copy.py'
Nov 22 05:42:50 compute-0 sudo[234283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:51 compute-0 python3.9[234285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790169.6121979-462-42850839516937/.source.json _original_basename=.awdj5eot follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:51 compute-0 sudo[234283]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:51 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 22 05:42:51 compute-0 sudo[234436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toyegcdpewsniqrfbzeowpjepletrcvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790171.412856-477-200089002752378/AnsiballZ_file.py'
Nov 22 05:42:51 compute-0 sudo[234436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:51 compute-0 python3.9[234438]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:42:51 compute-0 sudo[234436]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:52 compute-0 sudo[234588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tohdrztbgfvcriptpaxwishiwmqazmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790172.279183-485-242792436696382/AnsiballZ_stat.py'
Nov 22 05:42:52 compute-0 sudo[234588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:52 compute-0 ceph-mon[75840]: pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:52 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 05:42:52 compute-0 sudo[234588]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:42:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:42:53 compute-0 sudo[234712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavwmrvmgksznfbfcytmlpwyqotcfzuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790172.279183-485-242792436696382/AnsiballZ_copy.py'
Nov 22 05:42:53 compute-0 sudo[234712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:53 compute-0 sudo[234712]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:54 compute-0 sudo[234864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcltligtanyjalqiozhwkqdvzwxluypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790173.8462834-502-152953904437855/AnsiballZ_container_config_data.py'
Nov 22 05:42:54 compute-0 sudo[234864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:54 compute-0 python3.9[234866]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 22 05:42:54 compute-0 sudo[234864]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:54 compute-0 ceph-mon[75840]: pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:55 compute-0 sudo[235016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiawprvhgtatalpaopupohlbwmmoxrdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790174.9015381-511-93107631023097/AnsiballZ_container_config_hash.py'
Nov 22 05:42:55 compute-0 sudo[235016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:55 compute-0 python3.9[235018]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 05:42:55 compute-0 sudo[235016]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:56 compute-0 sudo[235168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnosupeyfinnxyvvomgaccdkcwhfadmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790176.0641925-520-47763561280781/AnsiballZ_podman_container_info.py'
Nov 22 05:42:56 compute-0 sudo[235168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:56 compute-0 ceph-mon[75840]: pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:56 compute-0 python3.9[235170]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 05:42:57 compute-0 sudo[235168]: pam_unix(sudo:session): session closed for user root
Nov 22 05:42:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:42:58 compute-0 sudo[235347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgctazvwyfwtcaquqrpseamitivvywog ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763790177.8430047-533-72315315240179/AnsiballZ_edpm_container_manage.py'
Nov 22 05:42:58 compute-0 sudo[235347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:42:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:42:58 compute-0 python3[235349]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 05:42:58 compute-0 ceph-mon[75840]: pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:00 compute-0 podman[235362]: 2025-11-22 05:43:00.110625075 +0000 UTC m=+1.344840068 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 05:43:00 compute-0 podman[235418]: 2025-11-22 05:43:00.321943111 +0000 UTC m=+0.068840374 container create 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:43:00 compute-0 podman[235418]: 2025-11-22 05:43:00.289270296 +0000 UTC m=+0.036167609 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 05:43:00 compute-0 python3[235349]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 05:43:00 compute-0 sudo[235347]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:00 compute-0 ceph-mon[75840]: pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:01 compute-0 sudo[235607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxymlaafacutvivneshdgafaathlasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790180.7675006-541-147902448757163/AnsiballZ_stat.py'
Nov 22 05:43:01 compute-0 sudo[235607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:01 compute-0 python3.9[235609]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:43:01 compute-0 sudo[235607]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:02 compute-0 sudo[235761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oipyuazxrwdgrtjnszgwerbjjpedirpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790181.7543468-550-108351509376441/AnsiballZ_file.py'
Nov 22 05:43:02 compute-0 sudo[235761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:02 compute-0 python3.9[235763]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:02 compute-0 sudo[235761]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:02 compute-0 sudo[235837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lutxewdhrryriqiywcmezixreysbrwmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790181.7543468-550-108351509376441/AnsiballZ_stat.py'
Nov 22 05:43:02 compute-0 sudo[235837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:02 compute-0 ceph-mon[75840]: pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:02 compute-0 python3.9[235839]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:43:02 compute-0 sudo[235837]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:03 compute-0 sudo[235988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rintrfpcfdxpkvdmioztzcmdhkjosgvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790182.957054-550-250754807445925/AnsiballZ_copy.py'
Nov 22 05:43:03 compute-0 sudo[235988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:03 compute-0 python3.9[235990]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763790182.957054-550-250754807445925/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:03 compute-0 sudo[235988]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:04 compute-0 sudo[236064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkzdjdpzojcaywblsxbwfjuahslrkhar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790182.957054-550-250754807445925/AnsiballZ_systemd.py'
Nov 22 05:43:04 compute-0 sudo[236064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:04 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 05:43:04 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 05:43:04 compute-0 python3.9[236066]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:43:04 compute-0 systemd[1]: Reloading.
Nov 22 05:43:04 compute-0 podman[236068]: 2025-11-22 05:43:04.497217531 +0000 UTC m=+0.108170215 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 05:43:04 compute-0 systemd-rc-local-generator[236121]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:04 compute-0 systemd-sysv-generator[236124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:04 compute-0 ceph-mon[75840]: pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:04 compute-0 sudo[236064]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:05 compute-0 sudo[236202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekgnqmslvfhbbkeeurmjpylmftdhczum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790182.957054-550-250754807445925/AnsiballZ_systemd.py'
Nov 22 05:43:05 compute-0 sudo[236202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:05 compute-0 python3.9[236204]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:05 compute-0 systemd[1]: Reloading.
Nov 22 05:43:05 compute-0 systemd-sysv-generator[236234]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:05 compute-0 systemd-rc-local-generator[236230]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:05 compute-0 systemd[1]: Starting multipathd container...
Nov 22 05:43:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 05:43:06 compute-0 podman[236244]: 2025-11-22 05:43:06.166283113 +0000 UTC m=+0.154175953 container init 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 05:43:06 compute-0 multipathd[236259]: + sudo -E kolla_set_configs
Nov 22 05:43:06 compute-0 podman[236244]: 2025-11-22 05:43:06.198806255 +0000 UTC m=+0.186699105 container start 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 22 05:43:06 compute-0 podman[236244]: multipathd
Nov 22 05:43:06 compute-0 systemd[1]: Started multipathd container.
Nov 22 05:43:06 compute-0 sudo[236265]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 05:43:06 compute-0 sudo[236265]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 05:43:06 compute-0 sudo[236265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 05:43:06 compute-0 sudo[236202]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:06 compute-0 multipathd[236259]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:43:06 compute-0 multipathd[236259]: INFO:__main__:Validating config file
Nov 22 05:43:06 compute-0 multipathd[236259]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:43:06 compute-0 multipathd[236259]: INFO:__main__:Writing out command to execute
Nov 22 05:43:06 compute-0 sudo[236265]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:06 compute-0 multipathd[236259]: ++ cat /run_command
Nov 22 05:43:06 compute-0 multipathd[236259]: + CMD='/usr/sbin/multipathd -d'
Nov 22 05:43:06 compute-0 multipathd[236259]: + ARGS=
Nov 22 05:43:06 compute-0 multipathd[236259]: + sudo kolla_copy_cacerts
Nov 22 05:43:06 compute-0 podman[236266]: 2025-11-22 05:43:06.305345196 +0000 UTC m=+0.086286886 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:43:06 compute-0 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 05:43:06 compute-0 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.service: Failed with result 'exit-code'.
Nov 22 05:43:06 compute-0 sudo[236290]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 05:43:06 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:43:06 compute-0 multipathd[236259]: + [[ ! -n '' ]]
Nov 22 05:43:06 compute-0 sudo[236290]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 05:43:06 compute-0 multipathd[236259]: + . kolla_extend_start
Nov 22 05:43:06 compute-0 sudo[236290]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 05:43:06 compute-0 multipathd[236259]: Running command: '/usr/sbin/multipathd -d'
Nov 22 05:43:06 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:43:06 compute-0 multipathd[236259]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 05:43:06 compute-0 sudo[236290]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:06 compute-0 multipathd[236259]: + umask 0022
Nov 22 05:43:06 compute-0 multipathd[236259]: + exec /usr/sbin/multipathd -d
Nov 22 05:43:06 compute-0 multipathd[236259]: 3639.782810 | --------start up--------
Nov 22 05:43:06 compute-0 multipathd[236259]: 3639.782835 | read /etc/multipath.conf
Nov 22 05:43:06 compute-0 multipathd[236259]: 3639.791450 | path checkers start up
Nov 22 05:43:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:06 compute-0 ceph-mon[75840]: pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:07 compute-0 python3.9[236449]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:43:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:07 compute-0 sudo[236601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcbcpvmspyxwmgkiqfyhxzploljkjjbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790187.3529441-586-265136416325178/AnsiballZ_command.py'
Nov 22 05:43:07 compute-0 sudo[236601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:07 compute-0 python3.9[236603]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:07 compute-0 sudo[236601]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:08 compute-0 sudo[236766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwwbpowaeufmultaxabbjcvjfvwlmmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790188.1777534-594-74404590847632/AnsiballZ_systemd.py'
Nov 22 05:43:08 compute-0 sudo[236766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:08 compute-0 ceph-mon[75840]: pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:08 compute-0 python3.9[236768]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:43:08 compute-0 systemd[1]: Stopping multipathd container...
Nov 22 05:43:08 compute-0 multipathd[236259]: 3642.405785 | exit (signal)
Nov 22 05:43:08 compute-0 multipathd[236259]: 3642.406512 | --------shut down-------
Nov 22 05:43:09 compute-0 systemd[1]: libpod-90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.scope: Deactivated successfully.
Nov 22 05:43:09 compute-0 podman[236772]: 2025-11-22 05:43:09.014200368 +0000 UTC m=+0.095668963 container died 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 22 05:43:09 compute-0 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-17db237c4aa34bb3.timer: Deactivated successfully.
Nov 22 05:43:09 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 05:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-userdata-shm.mount: Deactivated successfully.
Nov 22 05:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2-merged.mount: Deactivated successfully.
Nov 22 05:43:09 compute-0 podman[236772]: 2025-11-22 05:43:09.263704226 +0000 UTC m=+0.345172821 container cleanup 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 05:43:09 compute-0 podman[236772]: multipathd
Nov 22 05:43:09 compute-0 podman[236799]: multipathd
Nov 22 05:43:09 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 05:43:09 compute-0 systemd[1]: Stopped multipathd container.
Nov 22 05:43:09 compute-0 systemd[1]: Starting multipathd container...
Nov 22 05:43:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a629278736ba56c45ca31c5788d67bb66c9a0458278c86c40640cb9ea7ef9d2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:09 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b.
Nov 22 05:43:09 compute-0 podman[236812]: 2025-11-22 05:43:09.552394969 +0000 UTC m=+0.160803749 container init 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:43:09 compute-0 multipathd[236828]: + sudo -E kolla_set_configs
Nov 22 05:43:09 compute-0 sudo[236834]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 05:43:09 compute-0 podman[236812]: 2025-11-22 05:43:09.5939922 +0000 UTC m=+0.202400920 container start 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 22 05:43:09 compute-0 sudo[236834]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 05:43:09 compute-0 sudo[236834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 05:43:09 compute-0 podman[236812]: multipathd
Nov 22 05:43:09 compute-0 systemd[1]: Started multipathd container.
Nov 22 05:43:09 compute-0 sudo[236766]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:09 compute-0 multipathd[236828]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:43:09 compute-0 multipathd[236828]: INFO:__main__:Validating config file
Nov 22 05:43:09 compute-0 multipathd[236828]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:43:09 compute-0 multipathd[236828]: INFO:__main__:Writing out command to execute
Nov 22 05:43:09 compute-0 sudo[236834]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:09 compute-0 multipathd[236828]: ++ cat /run_command
Nov 22 05:43:09 compute-0 multipathd[236828]: + CMD='/usr/sbin/multipathd -d'
Nov 22 05:43:09 compute-0 multipathd[236828]: + ARGS=
Nov 22 05:43:09 compute-0 multipathd[236828]: + sudo kolla_copy_cacerts
Nov 22 05:43:09 compute-0 sudo[236855]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 05:43:09 compute-0 sudo[236855]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 05:43:09 compute-0 sudo[236855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 05:43:09 compute-0 podman[236835]: 2025-11-22 05:43:09.702781921 +0000 UTC m=+0.090925319 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:43:09 compute-0 sudo[236855]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:09 compute-0 multipathd[236828]: + [[ ! -n '' ]]
Nov 22 05:43:09 compute-0 multipathd[236828]: + . kolla_extend_start
Nov 22 05:43:09 compute-0 multipathd[236828]: Running command: '/usr/sbin/multipathd -d'
Nov 22 05:43:09 compute-0 multipathd[236828]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 05:43:09 compute-0 multipathd[236828]: + umask 0022
Nov 22 05:43:09 compute-0 multipathd[236828]: + exec /usr/sbin/multipathd -d
Nov 22 05:43:09 compute-0 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-1ca5ff9852e0b879.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 05:43:09 compute-0 systemd[1]: 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b-1ca5ff9852e0b879.service: Failed with result 'exit-code'.
Nov 22 05:43:09 compute-0 multipathd[236828]: 3643.157664 | --------start up--------
Nov 22 05:43:09 compute-0 multipathd[236828]: 3643.157683 | read /etc/multipath.conf
Nov 22 05:43:09 compute-0 multipathd[236828]: 3643.164534 | path checkers start up
Nov 22 05:43:10 compute-0 sudo[237017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwgskpodpzedqcvaqpmervivsilckrvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790189.9029443-602-228389903168006/AnsiballZ_file.py'
Nov 22 05:43:10 compute-0 sudo[237017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:10 compute-0 python3.9[237019]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:10 compute-0 sudo[237017]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:10 compute-0 ceph-mon[75840]: pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:11 compute-0 sudo[237169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrrbrloxsgfupbziugajzsywgkpdximm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790190.9786615-614-30457627951022/AnsiballZ_file.py'
Nov 22 05:43:11 compute-0 sudo[237169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:11 compute-0 python3.9[237171]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 05:43:11 compute-0 sudo[237169]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:12 compute-0 sudo[237321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhalzgqrktsvfurmllpxnjzyvtsejomu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790191.8438725-622-132911815234334/AnsiballZ_modprobe.py'
Nov 22 05:43:12 compute-0 sudo[237321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:12 compute-0 python3.9[237323]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 05:43:12 compute-0 kernel: Key type psk registered
Nov 22 05:43:12 compute-0 sudo[237321]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:12 compute-0 ceph-mon[75840]: pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:13 compute-0 sudo[237492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxzlodobzzygczsbxknrrsabvdhcpqvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790192.797749-630-66181010933327/AnsiballZ_stat.py'
Nov 22 05:43:13 compute-0 sudo[237492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:13 compute-0 podman[237456]: 2025-11-22 05:43:13.240557612 +0000 UTC m=+0.084816257 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:43:13 compute-0 python3.9[237500]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:43:13 compute-0 sudo[237492]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:13 compute-0 sudo[237621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzpxwxsgrrahmmvhonfvdhoutslpdupb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790192.797749-630-66181010933327/AnsiballZ_copy.py'
Nov 22 05:43:13 compute-0 sudo[237621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:14 compute-0 python3.9[237623]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763790192.797749-630-66181010933327/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:14 compute-0 sudo[237621]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:14 compute-0 ceph-mon[75840]: pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:14 compute-0 sudo[237773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoxkprflydphkcyyfgqqdzvnbwczjzdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790194.4410846-646-133526872057257/AnsiballZ_lineinfile.py'
Nov 22 05:43:14 compute-0 sudo[237773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:14 compute-0 python3.9[237775]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:15 compute-0 sudo[237773]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:15 compute-0 sudo[237925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtcowpilxdcofpwpojhcelzrgwlprvpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790195.233142-654-139301026083462/AnsiballZ_systemd.py'
Nov 22 05:43:15 compute-0 sudo[237925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:15 compute-0 python3.9[237927]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:43:15 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 05:43:15 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 05:43:15 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 05:43:15 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 05:43:16 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 05:43:16 compute-0 sudo[237925]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:16 compute-0 sudo[238081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbukgpaawdtgngzhomnmcfjiakqaeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790196.3344874-662-76468837424807/AnsiballZ_dnf.py'
Nov 22 05:43:16 compute-0 sudo[238081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:16 compute-0 ceph-mon[75840]: pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:16 compute-0 python3.9[238083]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 05:43:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:18 compute-0 ceph-mon[75840]: pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:19 compute-0 systemd[1]: Reloading.
Nov 22 05:43:19 compute-0 systemd-sysv-generator[238117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:19 compute-0 systemd-rc-local-generator[238111]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:19 compute-0 systemd[1]: Reloading.
Nov 22 05:43:19 compute-0 systemd-sysv-generator[238154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:19 compute-0 systemd-rc-local-generator[238149]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:20 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 05:43:20 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 05:43:20 compute-0 lvm[238194]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 05:43:20 compute-0 lvm[238194]: VG ceph_vg2 finished
Nov 22 05:43:20 compute-0 lvm[238195]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 05:43:20 compute-0 lvm[238195]: VG ceph_vg0 finished
Nov 22 05:43:20 compute-0 lvm[238197]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 05:43:20 compute-0 lvm[238197]: VG ceph_vg1 finished
Nov 22 05:43:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 05:43:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 05:43:20 compute-0 systemd[1]: Reloading.
Nov 22 05:43:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:20 compute-0 systemd-sysv-generator[238255]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:20 compute-0 systemd-rc-local-generator[238252]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:20 compute-0 ceph-mon[75840]: pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:20 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 05:43:21 compute-0 sudo[238340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:21 compute-0 sudo[238340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[238340]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[238464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:43:21 compute-0 sudo[238464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[238464]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[238553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:21 compute-0 sudo[238553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[238553]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[238660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:43:21 compute-0 sudo[238660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[238081]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[238660]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:21 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f83840c3-6e87-4edd-9555-1050f34df778 does not exist
Nov 22 05:43:21 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f04eb052-a28e-4b2c-b06d-503efa967d52 does not exist
Nov 22 05:43:21 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 354c2b91-6ac4-4ea9-94c0-2abb13472046 does not exist
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:43:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:43:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:43:21 compute-0 sudo[239343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:21 compute-0 sudo[239343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[239343]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[239455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:43:21 compute-0 sudo[239455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[239455]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:21 compute-0 sudo[239547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:21 compute-0 sudo[239547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:21 compute-0 sudo[239547]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:22 compute-0 sudo[239652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:43:22 compute-0 sudo[239652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:22 compute-0 sudo[239759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvbzbisqmvjkwvyubgxmhmjfrqhycbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790201.7610378-670-217608257387815/AnsiballZ_systemd_service.py'
Nov 22 05:43:22 compute-0 sudo[239759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 05:43:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 05:43:22 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.803s CPU time.
Nov 22 05:43:22 compute-0 systemd[1]: run-r377decdc017046f98ac041f4fc40288c.service: Deactivated successfully.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.331017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202331136, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1160, "num_deletes": 505, "total_data_size": 1252798, "memory_usage": 1285744, "flush_reason": "Manual Compaction"}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202339051, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1240629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13668, "largest_seqno": 14827, "table_properties": {"data_size": 1235470, "index_size": 2171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 13526, "raw_average_key_size": 17, "raw_value_size": 1223175, "raw_average_value_size": 1613, "num_data_blocks": 99, "num_entries": 758, "num_filter_entries": 758, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790117, "oldest_key_time": 1763790117, "file_creation_time": 1763790202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 8145 microseconds, and 3448 cpu microseconds.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.339143) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1240629 bytes OK
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.339219) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341380) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341392) EVENT_LOG_v1 {"time_micros": 1763790202341388, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.341408) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1246381, prev total WAL file size 1246381, number of live WAL files 2.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.342124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1211KB)], [32(7510KB)]
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202342170, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8931379, "oldest_snapshot_seqno": -1}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3788 keys, 7016608 bytes, temperature: kUnknown
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202401122, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7016608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6989620, "index_size": 16446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92922, "raw_average_key_size": 24, "raw_value_size": 6919303, "raw_average_value_size": 1826, "num_data_blocks": 696, "num_entries": 3788, "num_filter_entries": 3788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.401432) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7016608 bytes
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.403169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.3 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.3 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(12.9) write-amplify(5.7) OK, records in: 4811, records dropped: 1023 output_compression: NoCompression
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.403201) EVENT_LOG_v1 {"time_micros": 1763790202403186, "job": 14, "event": "compaction_finished", "compaction_time_micros": 59032, "compaction_time_cpu_micros": 19000, "output_level": 6, "num_output_files": 1, "total_output_size": 7016608, "num_input_records": 4811, "num_output_records": 3788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202403711, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790202405886, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.342004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:43:22.405972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:43:22 compute-0 python3.9[239773]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.451177135 +0000 UTC m=+0.069755258 container create b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:43:22 compute-0 systemd[1]: Started libpod-conmon-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope.
Nov 22 05:43:22 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 22 05:43:22 compute-0 iscsid[226967]: iscsid shutting down.
Nov 22 05:43:22 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 05:43:22 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 22 05:43:22 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.424900979 +0000 UTC m=+0.043479112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:22 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 05:43:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:22 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.54992874 +0000 UTC m=+0.168506913 container init b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.561600879 +0000 UTC m=+0.180179002 container start b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.565461431 +0000 UTC m=+0.184039624 container attach b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:43:22 compute-0 naughty_goodall[239830]: 167 167
Nov 22 05:43:22 compute-0 systemd[1]: libpod-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope: Deactivated successfully.
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.571571563 +0000 UTC m=+0.190149696 container died b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:43:22 compute-0 sudo[239759]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-05e316cc3ea5a2086b85e1006796c397914adaac65b020ff75e10aae981bb836-merged.mount: Deactivated successfully.
Nov 22 05:43:22 compute-0 podman[239812]: 2025-11-22 05:43:22.630210625 +0000 UTC m=+0.248788758 container remove b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:43:22 compute-0 systemd[1]: libpod-conmon-b3fa2826435cd8a1cd685601d904ef7306af2f617c0a90a746fa4f83aa81fcdd.scope: Deactivated successfully.
Nov 22 05:43:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:22 compute-0 podman[239882]: 2025-11-22 05:43:22.874793782 +0000 UTC m=+0.060659358 container create 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:43:22 compute-0 systemd[1]: Started libpod-conmon-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope.
Nov 22 05:43:22 compute-0 podman[239882]: 2025-11-22 05:43:22.844069278 +0000 UTC m=+0.029934904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:22 compute-0 podman[239882]: 2025-11-22 05:43:22.994854421 +0000 UTC m=+0.180720047 container init 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:43:23 compute-0 podman[239882]: 2025-11-22 05:43:23.008284786 +0000 UTC m=+0.194150372 container start 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:43:23 compute-0 podman[239882]: 2025-11-22 05:43:23.013272158 +0000 UTC m=+0.199137734 container attach 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:43:23 compute-0 ceph-mon[75840]: pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:23 compute-0 python3.9[240025]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 05:43:24 compute-0 kind_carver[239946]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:43:24 compute-0 kind_carver[239946]: --> relative data size: 1.0
Nov 22 05:43:24 compute-0 kind_carver[239946]: --> All data devices are unavailable
Nov 22 05:43:24 compute-0 systemd[1]: libpod-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Deactivated successfully.
Nov 22 05:43:24 compute-0 podman[239882]: 2025-11-22 05:43:24.189047669 +0000 UTC m=+1.374913225 container died 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:43:24 compute-0 systemd[1]: libpod-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Consumed 1.122s CPU time.
Nov 22 05:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dfd04ed72ba52a9dcc81425ec58b8b8b7af02eb854029de2ac8e03b5aa9b3b7-merged.mount: Deactivated successfully.
Nov 22 05:43:24 compute-0 podman[239882]: 2025-11-22 05:43:24.255727265 +0000 UTC m=+1.441592821 container remove 51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:43:24 compute-0 systemd[1]: libpod-conmon-51e4b5d4d6b9f98004999849713b0cc4c4429d3db97323f2019f39c291638c68.scope: Deactivated successfully.
Nov 22 05:43:24 compute-0 sudo[239652]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:24 compute-0 sudo[240166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:24 compute-0 sudo[240166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:24 compute-0 sudo[240166]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:24 compute-0 sudo[240214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:43:24 compute-0 sudo[240214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:24 compute-0 sudo[240214]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:24 compute-0 sudo[240266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjsgxrccjlwrukmnpdeucbajryttdtca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790204.1153555-688-172306431833910/AnsiballZ_file.py'
Nov 22 05:43:24 compute-0 sudo[240266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:24 compute-0 sudo[240267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:24 compute-0 sudo[240267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:24 compute-0 sudo[240267]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:24 compute-0 sudo[240294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:43:24 compute-0 sudo[240294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:24 compute-0 python3.9[240275]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:24 compute-0 sudo[240266]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:24 compute-0 ceph-mon[75840]: pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:25 compute-0 podman[240384]: 2025-11-22 05:43:25.134436291 +0000 UTC m=+0.060867433 container create e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:43:25 compute-0 systemd[1]: Started libpod-conmon-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope.
Nov 22 05:43:25 compute-0 podman[240384]: 2025-11-22 05:43:25.107980081 +0000 UTC m=+0.034411283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:25 compute-0 podman[240384]: 2025-11-22 05:43:25.24806903 +0000 UTC m=+0.174500182 container init e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:43:25 compute-0 podman[240384]: 2025-11-22 05:43:25.260126029 +0000 UTC m=+0.186557181 container start e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:43:25 compute-0 podman[240384]: 2025-11-22 05:43:25.264749592 +0000 UTC m=+0.191180734 container attach e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 05:43:25 compute-0 hopeful_dubinsky[240424]: 167 167
Nov 22 05:43:25 compute-0 systemd[1]: libpod-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope: Deactivated successfully.
Nov 22 05:43:25 compute-0 podman[240452]: 2025-11-22 05:43:25.314086408 +0000 UTC m=+0.033117789 container died e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4682a9d945cb53f8f22a0f310a26fb42021d4193298ffab416588c1443d50ee2-merged.mount: Deactivated successfully.
Nov 22 05:43:25 compute-0 podman[240452]: 2025-11-22 05:43:25.352812373 +0000 UTC m=+0.071843764 container remove e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 05:43:25 compute-0 systemd[1]: libpod-conmon-e36cfbd67dd4562df6646219efbb4a11a8865d3ef8672b066f59263cb0c4ca67.scope: Deactivated successfully.
Nov 22 05:43:25 compute-0 podman[240527]: 2025-11-22 05:43:25.553717842 +0000 UTC m=+0.062654170 container create a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:43:25 compute-0 sudo[240567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpalshockrotibqngginnyhvyonebhou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790205.1972446-699-215526133597767/AnsiballZ_systemd_service.py'
Nov 22 05:43:25 compute-0 sudo[240567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:25 compute-0 systemd[1]: Started libpod-conmon-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope.
Nov 22 05:43:25 compute-0 podman[240527]: 2025-11-22 05:43:25.522562288 +0000 UTC m=+0.031498696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:25 compute-0 podman[240527]: 2025-11-22 05:43:25.664261199 +0000 UTC m=+0.173197557 container init a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:43:25 compute-0 podman[240527]: 2025-11-22 05:43:25.676548655 +0000 UTC m=+0.185485003 container start a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:43:25 compute-0 podman[240527]: 2025-11-22 05:43:25.699532243 +0000 UTC m=+0.208468641 container attach a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:43:25 compute-0 python3.9[240572]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:43:25 compute-0 systemd[1]: Reloading.
Nov 22 05:43:25 compute-0 systemd-sysv-generator[240606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:26 compute-0 systemd-rc-local-generator[240603]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:26 compute-0 sudo[240567]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:26 compute-0 fervent_haibt[240573]: {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     "0": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "devices": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "/dev/loop3"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             ],
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_name": "ceph_lv0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_size": "21470642176",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "name": "ceph_lv0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "tags": {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_name": "ceph",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.crush_device_class": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.encrypted": "0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_id": "0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.vdo": "0"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             },
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "vg_name": "ceph_vg0"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         }
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     ],
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     "1": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "devices": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "/dev/loop4"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             ],
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_name": "ceph_lv1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_size": "21470642176",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "name": "ceph_lv1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "tags": {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_name": "ceph",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.crush_device_class": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.encrypted": "0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_id": "1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.vdo": "0"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             },
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "vg_name": "ceph_vg1"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         }
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     ],
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     "2": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "devices": [
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "/dev/loop5"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             ],
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_name": "ceph_lv2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_size": "21470642176",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "name": "ceph_lv2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "tags": {
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.cluster_name": "ceph",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.crush_device_class": "",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.encrypted": "0",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osd_id": "2",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:                 "ceph.vdo": "0"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             },
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "type": "block",
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:             "vg_name": "ceph_vg2"
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:         }
Nov 22 05:43:26 compute-0 fervent_haibt[240573]:     ]
Nov 22 05:43:26 compute-0 fervent_haibt[240573]: }
Nov 22 05:43:26 compute-0 systemd[1]: libpod-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope: Deactivated successfully.
Nov 22 05:43:26 compute-0 podman[240527]: 2025-11-22 05:43:26.475400846 +0000 UTC m=+0.984337164 container died a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 22 05:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-90ae710fce7f43143dc2dddf26786cb1252406912f7f387fdbe22c767ae96ab1-merged.mount: Deactivated successfully.
Nov 22 05:43:26 compute-0 podman[240527]: 2025-11-22 05:43:26.541738563 +0000 UTC m=+1.050674891 container remove a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:43:26 compute-0 systemd[1]: libpod-conmon-a750e245cc3c41978a7b82fe4e5d49a47c6958324d7e727dc68fadb6be53eb76.scope: Deactivated successfully.
Nov 22 05:43:26 compute-0 sudo[240294]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:26 compute-0 sudo[240707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:26 compute-0 sudo[240707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:26 compute-0 sudo[240707]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:26 compute-0 ceph-mon[75840]: pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:26 compute-0 sudo[240764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:43:26 compute-0 sudo[240764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:26 compute-0 sudo[240764]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:26 compute-0 sudo[240825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:26 compute-0 sudo[240825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:26 compute-0 sudo[240825]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:26 compute-0 sudo[240853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:43:26 compute-0 sudo[240853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:26 compute-0 python3.9[240830]: ansible-ansible.builtin.service_facts Invoked
Nov 22 05:43:27 compute-0 network[240894]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 05:43:27 compute-0 network[240897]: 'network-scripts' will be removed from distribution in near future.
Nov 22 05:43:27 compute-0 network[240898]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 05:43:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:27 compute-0 podman[240941]: 2025-11-22 05:43:27.341208521 +0000 UTC m=+0.060257257 container create b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:43:27 compute-0 podman[240941]: 2025-11-22 05:43:27.310139138 +0000 UTC m=+0.029187934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:27 compute-0 systemd[1]: Started libpod-conmon-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope.
Nov 22 05:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:28 compute-0 podman[240941]: 2025-11-22 05:43:28.041616875 +0000 UTC m=+0.760665621 container init b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:43:28 compute-0 podman[240941]: 2025-11-22 05:43:28.049861584 +0000 UTC m=+0.768910290 container start b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:43:28 compute-0 podman[240941]: 2025-11-22 05:43:28.052929365 +0000 UTC m=+0.771978111 container attach b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:43:28 compute-0 gallant_ramanujan[240959]: 167 167
Nov 22 05:43:28 compute-0 podman[240941]: 2025-11-22 05:43:28.056040907 +0000 UTC m=+0.775089623 container died b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:43:28 compute-0 systemd[1]: libpod-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope: Deactivated successfully.
Nov 22 05:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a73aeeb86f222f0f35b59ed5d5db01699d820b94f17cb6fddd7a322f07ce3176-merged.mount: Deactivated successfully.
Nov 22 05:43:28 compute-0 podman[240941]: 2025-11-22 05:43:28.094037813 +0000 UTC m=+0.813086539 container remove b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:43:28 compute-0 systemd[1]: libpod-conmon-b40399aa6192e8868a4bf68ed027c42a68237e815fe2cb50985261fc4ec18ced.scope: Deactivated successfully.
Nov 22 05:43:28 compute-0 podman[240993]: 2025-11-22 05:43:28.316028571 +0000 UTC m=+0.071747300 container create d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:43:28 compute-0 systemd[1]: Started libpod-conmon-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope.
Nov 22 05:43:28 compute-0 podman[240993]: 2025-11-22 05:43:28.288536403 +0000 UTC m=+0.044255182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:43:28 compute-0 podman[240993]: 2025-11-22 05:43:28.41532563 +0000 UTC m=+0.171044329 container init d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:43:28 compute-0 podman[240993]: 2025-11-22 05:43:28.428014326 +0000 UTC m=+0.183733015 container start d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:43:28 compute-0 podman[240993]: 2025-11-22 05:43:28.431387605 +0000 UTC m=+0.187106284 container attach d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:43:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:28 compute-0 ceph-mon[75840]: pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:29 compute-0 elegant_austin[241015]: {
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_id": 1,
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "type": "bluestore"
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     },
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_id": 2,
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "type": "bluestore"
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     },
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_id": 0,
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:43:29 compute-0 elegant_austin[241015]:         "type": "bluestore"
Nov 22 05:43:29 compute-0 elegant_austin[241015]:     }
Nov 22 05:43:29 compute-0 elegant_austin[241015]: }
Nov 22 05:43:29 compute-0 systemd[1]: libpod-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Deactivated successfully.
Nov 22 05:43:29 compute-0 podman[240993]: 2025-11-22 05:43:29.462573879 +0000 UTC m=+1.218292608 container died d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:43:29 compute-0 systemd[1]: libpod-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Consumed 1.043s CPU time.
Nov 22 05:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-78fe6a0392a56a874997552c7c3f100edc4bc354efd382385cb3caf87ab1e96e-merged.mount: Deactivated successfully.
Nov 22 05:43:29 compute-0 podman[240993]: 2025-11-22 05:43:29.537440591 +0000 UTC m=+1.293159290 container remove d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_austin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:43:29 compute-0 systemd[1]: libpod-conmon-d6601f67081cad41ae80f6f59f8dc406573aba86e337af4e731d59efca545f75.scope: Deactivated successfully.
Nov 22 05:43:29 compute-0 sudo[240853]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:43:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:43:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:29 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d5082e7e-3cc2-4864-b924-a0720057f00b does not exist
Nov 22 05:43:29 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6941ae91-2600-4596-9a6d-ac275627ac8f does not exist
Nov 22 05:43:29 compute-0 sudo[241107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:43:29 compute-0 sudo[241107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:29 compute-0 sudo[241107]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:29 compute-0 sudo[241136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:43:29 compute-0 sudo[241136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:43:29 compute-0 sudo[241136]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:43:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:31 compute-0 ceph-mon[75840]: pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:31 compute-0 sudo[241361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncjrgihfoewiceyhjphxselrvtwdasvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790211.490979-718-72092133337199/AnsiballZ_systemd_service.py'
Nov 22 05:43:31 compute-0 sudo[241361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:32 compute-0 python3.9[241363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:32 compute-0 sudo[241361]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:32 compute-0 ceph-mon[75840]: pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:32 compute-0 sudo[241514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kownkbdwhiygacwtlqzglsnaeyeqkfkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790212.410322-718-200981390330696/AnsiballZ_systemd_service.py'
Nov 22 05:43:32 compute-0 sudo[241514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:33 compute-0 python3.9[241516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:33 compute-0 sudo[241514]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:33 compute-0 sudo[241669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icaqzporgckrmhhxxccprifsbxetewmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790213.3756835-718-128793726481276/AnsiballZ_systemd_service.py'
Nov 22 05:43:33 compute-0 sudo[241669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:33 compute-0 sshd-session[241536]: Invalid user solana from 80.94.92.166 port 33490
Nov 22 05:43:34 compute-0 sshd-session[241536]: Connection closed by invalid user solana 80.94.92.166 port 33490 [preauth]
Nov 22 05:43:34 compute-0 python3.9[241671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:34 compute-0 sudo[241669]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:34 compute-0 sudo[241822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltnglbmkcvtrzhyzxrvvrkiupghjegxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790214.2307484-718-121992424662722/AnsiballZ_systemd_service.py'
Nov 22 05:43:34 compute-0 sudo[241822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:34 compute-0 ceph-mon[75840]: pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:34 compute-0 python3.9[241824]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:35 compute-0 sudo[241822]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:35 compute-0 podman[241826]: 2025-11-22 05:43:35.120992999 +0000 UTC m=+0.122772551 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:43:35 compute-0 sudo[242001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxesxhxqifjoipodtvmlqdtwesndlzwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790215.1933556-718-69343162546902/AnsiballZ_systemd_service.py'
Nov 22 05:43:35 compute-0 sudo[242001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:35 compute-0 python3.9[242003]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:35 compute-0 sudo[242001]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:36 compute-0 sudo[242154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycypsiyklnyifzhqucnoltptxggqhev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790216.1049595-718-239213628220309/AnsiballZ_systemd_service.py'
Nov 22 05:43:36 compute-0 sudo[242154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:36 compute-0 ceph-mon[75840]: pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:36 compute-0 python3.9[242156]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:36 compute-0 sudo[242154]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:43:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:43:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:43:36.906 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:43:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:37 compute-0 sudo[242307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pylcojfpejmdmwwceuzgywvhlttbsmsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790216.9746199-718-234654772559022/AnsiballZ_systemd_service.py'
Nov 22 05:43:37 compute-0 sudo[242307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:37 compute-0 python3.9[242309]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:37 compute-0 sudo[242307]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:38 compute-0 sudo[242460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezbuntjoipeefdqknfkcyxxwvlbplfxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790217.929835-718-194636683180981/AnsiballZ_systemd_service.py'
Nov 22 05:43:38 compute-0 sudo[242460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:38 compute-0 python3.9[242462]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:43:38 compute-0 sudo[242460]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:38 compute-0 ceph-mon[75840]: pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:39 compute-0 sudo[242613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htytkzcsthwtkzexdbhdqmokxozywdho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790218.8887203-777-224511015505696/AnsiballZ_file.py'
Nov 22 05:43:39 compute-0 sudo[242613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:39 compute-0 python3.9[242615]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:39 compute-0 sudo[242613]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:39 compute-0 sudo[242782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhbtormqgzhsdsrsamnrkitnhziuxss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790219.6033247-777-137759691247053/AnsiballZ_file.py'
Nov 22 05:43:39 compute-0 sudo[242782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:39 compute-0 podman[242739]: 2025-11-22 05:43:39.981335508 +0000 UTC m=+0.087105028 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:43:40 compute-0 python3.9[242787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:40 compute-0 sudo[242782]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:40 compute-0 sudo[242937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khgathussienrvzrccnmoakmoxshodob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790220.3557541-777-235052131391041/AnsiballZ_file.py'
Nov 22 05:43:40 compute-0 sudo[242937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:40 compute-0 ceph-mon[75840]: pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:40 compute-0 python3.9[242939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:40 compute-0 sudo[242937]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:41 compute-0 sudo[243089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmxyvkczbfjpexmxrbwwfpruemzxnggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790221.0441701-777-113605674825549/AnsiballZ_file.py'
Nov 22 05:43:41 compute-0 sudo[243089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:41 compute-0 python3.9[243091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:41 compute-0 sudo[243089]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:42 compute-0 sudo[243241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oystugbnrrfqflmaluhjqyxazxedhfod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790221.791317-777-181636987283137/AnsiballZ_file.py'
Nov 22 05:43:42 compute-0 sudo[243241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:42 compute-0 python3.9[243243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:42 compute-0 sudo[243241]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:42 compute-0 ceph-mon[75840]: pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:42 compute-0 sudo[243393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmbcaqecitvqoavqvincuyiomvusqjsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790222.5975142-777-246133405798289/AnsiballZ_file.py'
Nov 22 05:43:42 compute-0 sudo[243393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:43 compute-0 python3.9[243395]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:43 compute-0 sudo[243393]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:43:43
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control']
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:43:43 compute-0 sudo[243557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smgqiqqrxfrcwqbjuyubrdmyliredlgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790223.3808165-777-201818319458702/AnsiballZ_file.py'
Nov 22 05:43:43 compute-0 sudo[243557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:43 compute-0 podman[243519]: 2025-11-22 05:43:43.827059922 +0000 UTC m=+0.094907844 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:43:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:43:44 compute-0 python3.9[243565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:44 compute-0 sudo[243557]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:44 compute-0 sudo[243715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poxanevarjlusymutncstwgzslcerinr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790224.265217-777-279467585308009/AnsiballZ_file.py'
Nov 22 05:43:44 compute-0 sudo[243715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:44 compute-0 ceph-mon[75840]: pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:44 compute-0 python3.9[243717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:44 compute-0 sudo[243715]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:45 compute-0 sudo[243867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdyuubrsiamkfjjaaoenzziinfzvwtcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790225.1579852-834-19721963552008/AnsiballZ_file.py'
Nov 22 05:43:45 compute-0 sudo[243867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:45 compute-0 python3.9[243869]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:45 compute-0 sudo[243867]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:46 compute-0 sudo[244019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhghhubnzowvvhwsopekjnkuvbxrtucl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790225.9027972-834-131826286104798/AnsiballZ_file.py'
Nov 22 05:43:46 compute-0 sudo[244019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:46 compute-0 python3.9[244021]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:46 compute-0 sudo[244019]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:46 compute-0 ceph-mon[75840]: pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:47 compute-0 sudo[244171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nihvcgiktgxxuxzxujiusirgpqwljgom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790226.6517785-834-17869780276712/AnsiballZ_file.py'
Nov 22 05:43:47 compute-0 sudo[244171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:47 compute-0 python3.9[244173]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:47 compute-0 sudo[244171]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:47 compute-0 sudo[244323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndkrwykwgdreqkoyixxkkocmmbdvckys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790227.41891-834-276098611267719/AnsiballZ_file.py'
Nov 22 05:43:47 compute-0 sudo[244323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:47 compute-0 python3.9[244325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:47 compute-0 sudo[244323]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:48 compute-0 sudo[244475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqzztskhhrgidktbtnyjumfiqqcopmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790228.122457-834-123928391474484/AnsiballZ_file.py'
Nov 22 05:43:48 compute-0 sudo[244475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:48 compute-0 python3.9[244477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:48 compute-0 sudo[244475]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:48 compute-0 ceph-mon[75840]: pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:49 compute-0 sudo[244627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nidcquaeuamwpjfaqlxquvdkfgcofoie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790228.7806814-834-177515042403780/AnsiballZ_file.py'
Nov 22 05:43:49 compute-0 sudo[244627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:49 compute-0 python3.9[244629]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:49 compute-0 sudo[244627]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:49 compute-0 sudo[244779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvmtfqpijiipcrveysgqrfnmozlxvkvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790229.521551-834-224527426016208/AnsiballZ_file.py'
Nov 22 05:43:49 compute-0 sudo[244779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:50 compute-0 python3.9[244781]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:50 compute-0 sudo[244779]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:50 compute-0 sudo[244931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkvhambjtwaozijwqtfalwjljgpagfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790230.3613758-834-43195976558175/AnsiballZ_file.py'
Nov 22 05:43:50 compute-0 sudo[244931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:50 compute-0 ceph-mon[75840]: pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:50 compute-0 python3.9[244933]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:43:50 compute-0 sudo[244931]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:51 compute-0 sudo[245083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pooqidpdltazuqvndpwpgihfrrmjhpdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790231.204572-892-170627644390630/AnsiballZ_command.py'
Nov 22 05:43:51 compute-0 sudo[245083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:51 compute-0 python3.9[245085]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:51 compute-0 sudo[245083]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:52 compute-0 ceph-mon[75840]: pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:52 compute-0 python3.9[245237]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:43:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:43:53 compute-0 sudo[245387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-majhgrffvvyjvxspqfvjujpkiychrofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790233.0980217-910-229997855253692/AnsiballZ_systemd_service.py'
Nov 22 05:43:53 compute-0 sudo[245387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:53 compute-0 python3.9[245389]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:43:53 compute-0 systemd[1]: Reloading.
Nov 22 05:43:53 compute-0 systemd-sysv-generator[245416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:43:53 compute-0 systemd-rc-local-generator[245412]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:43:54 compute-0 sudo[245387]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:54 compute-0 ceph-mon[75840]: pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:54 compute-0 sudo[245573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hovbqexvckrefpeohevlvgsxhbwkzlvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790234.4640074-918-188275380731397/AnsiballZ_command.py'
Nov 22 05:43:54 compute-0 sudo[245573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:55 compute-0 python3.9[245575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:55 compute-0 sudo[245573]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:55 compute-0 sudo[245726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twtgrzkqpcjziugpglnxhpemzzzfyxoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790235.259487-918-6375980229328/AnsiballZ_command.py'
Nov 22 05:43:55 compute-0 sudo[245726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:55 compute-0 python3.9[245728]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:55 compute-0 sudo[245726]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:56 compute-0 sudo[245879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkkjijkomudcayfaksldzrlbtgaryama ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790235.9978359-918-251928894397440/AnsiballZ_command.py'
Nov 22 05:43:56 compute-0 sudo[245879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:56 compute-0 python3.9[245881]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:56 compute-0 sudo[245879]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:56 compute-0 ceph-mon[75840]: pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:56 compute-0 sudo[246032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktcuweqyeuqzvuskdtluhrbupftwhwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790236.6802676-918-62198942064064/AnsiballZ_command.py'
Nov 22 05:43:56 compute-0 sudo[246032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:43:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3350 writes, 14K keys, 3350 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3350 writes, 3350 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5837 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.52 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    116.3      0.13              0.06         7    0.019       0      0       0.0       0.0
                                             L6      1/0    6.69 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.7    157.9    130.8      0.31              0.15         6    0.052     24K   3186       0.0       0.0
                                            Sum      1/0    6.69 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    111.3    126.5      0.44              0.21        13    0.034     24K   3186       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    119.8    120.2      0.28              0.13         8    0.035     17K   2458       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    157.9    130.8      0.31              0.15         6    0.052     24K   3186       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    125.3      0.12              0.06         6    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 308.00 MB usage: 1.53 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(98,1.31 MB,0.425136%) FilterBlock(14,74.67 KB,0.0236759%) IndexBlock(14,152.80 KB,0.0484467%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 05:43:57 compute-0 python3.9[246034]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:57 compute-0 sudo[246032]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:43:57 compute-0 sudo[246185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pswcalfwwohnuofrushaprqmjeuzhqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790237.419971-918-181386975088290/AnsiballZ_command.py'
Nov 22 05:43:57 compute-0 sudo[246185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:57 compute-0 python3.9[246187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:58 compute-0 sudo[246185]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:58 compute-0 sudo[246338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdhfdbhxsxgeausttvehddxcxrjwurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790238.1763775-918-225987485692436/AnsiballZ_command.py'
Nov 22 05:43:58 compute-0 sudo[246338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:58 compute-0 ceph-mon[75840]: pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:43:58 compute-0 python3.9[246340]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:58 compute-0 sudo[246338]: pam_unix(sudo:session): session closed for user root
Nov 22 05:43:59 compute-0 sudo[246491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jetjxruexhrjzphgmqywivxlyafkuxny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790239.0407832-918-214986311335155/AnsiballZ_command.py'
Nov 22 05:43:59 compute-0 sudo[246491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:43:59 compute-0 python3.9[246493]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:43:59 compute-0 sudo[246491]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:00 compute-0 sudo[246644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvrnwiocujwwjqnuatkszyhbowuhvsck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790239.8622413-918-146248561392881/AnsiballZ_command.py'
Nov 22 05:44:00 compute-0 sudo[246644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:00 compute-0 python3.9[246646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 05:44:00 compute-0 sudo[246644]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:00 compute-0 ceph-mon[75840]: pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:01 compute-0 sudo[246797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uihvgtnlhslcqkxcwlhjutysjyoredsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790241.4309475-997-47469043097995/AnsiballZ_file.py'
Nov 22 05:44:01 compute-0 sudo[246797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:02 compute-0 python3.9[246799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:02 compute-0 sudo[246797]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:02 compute-0 sudo[246949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdugajedolsijfsxmynyoivqcfpokrsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790242.2172225-997-24326847061931/AnsiballZ_file.py'
Nov 22 05:44:02 compute-0 sudo[246949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:02 compute-0 ceph-mon[75840]: pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:02 compute-0 python3.9[246951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:02 compute-0 sudo[246949]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:03 compute-0 sudo[247101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unnvbhlcovezjdwljxpaorsstygvaxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790242.9743977-997-203017823975822/AnsiballZ_file.py'
Nov 22 05:44:03 compute-0 sudo[247101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:03 compute-0 python3.9[247103]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:03 compute-0 sudo[247101]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:04 compute-0 sudo[247253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paoinyybcelxrmygvpxpkxxhucfbfmtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790243.825294-1019-182852900640073/AnsiballZ_file.py'
Nov 22 05:44:04 compute-0 sudo[247253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:04 compute-0 python3.9[247255]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:04 compute-0 sudo[247253]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:04 compute-0 ceph-mon[75840]: pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:04 compute-0 sudo[247405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhloyuwboynepridvnilrpvgllxhanwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790244.620581-1019-9178458306694/AnsiballZ_file.py'
Nov 22 05:44:04 compute-0 sudo[247405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:05 compute-0 python3.9[247407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:05 compute-0 sudo[247405]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:05 compute-0 podman[247408]: 2025-11-22 05:44:05.355678491 +0000 UTC m=+0.115486851 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:44:05 compute-0 sudo[247583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqmmvxvsvvpttykcaofikdceajzrmivx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790245.423556-1019-115509005817615/AnsiballZ_file.py'
Nov 22 05:44:05 compute-0 sudo[247583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:06 compute-0 python3.9[247585]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:06 compute-0 sudo[247583]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:06 compute-0 sudo[247735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbbxkguohsouqllvyabikeqwazptldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790246.2160559-1019-74383777220116/AnsiballZ_file.py'
Nov 22 05:44:06 compute-0 sudo[247735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:06 compute-0 ceph-mon[75840]: pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:06 compute-0 python3.9[247737]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:06 compute-0 sudo[247735]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:07 compute-0 sudo[247887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-envdemqxseqlofedgasrezczkvfxlyug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790246.9857416-1019-249054107355052/AnsiballZ_file.py'
Nov 22 05:44:07 compute-0 sudo[247887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:07 compute-0 python3.9[247889]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:07 compute-0 sudo[247887]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:08 compute-0 sudo[248039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlfvrezkynrebvtjvcaxrtzfgromcchj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790247.8247225-1019-88647466991194/AnsiballZ_file.py'
Nov 22 05:44:08 compute-0 sudo[248039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:08 compute-0 python3.9[248041]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:08 compute-0 sudo[248039]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:08 compute-0 ceph-mon[75840]: pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:08 compute-0 sudo[248191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqjbxwfctsgdychxrbmnoilxoernzgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790248.5691223-1019-127719908711821/AnsiballZ_file.py'
Nov 22 05:44:08 compute-0 sudo[248191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:09 compute-0 python3.9[248193]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:09 compute-0 sudo[248191]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:10 compute-0 podman[248218]: 2025-11-22 05:44:10.243824263 +0000 UTC m=+0.086024485 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 22 05:44:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:10 compute-0 ceph-mon[75840]: pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:12 compute-0 ceph-mon[75840]: pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:14 compute-0 podman[248240]: 2025-11-22 05:44:14.199573318 +0000 UTC m=+0.062668403 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 05:44:14 compute-0 sudo[248384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gikebkhwxjqlxmatqzyjczqgpziyylcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790254.1919248-1208-24406701923067/AnsiballZ_getent.py'
Nov 22 05:44:14 compute-0 sudo[248384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:14 compute-0 ceph-mon[75840]: pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:14 compute-0 python3.9[248386]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 05:44:14 compute-0 sudo[248384]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:15 compute-0 sudo[248537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgifwdsmxzdczwueehbsedjxzdzuwloo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790255.0954807-1216-159521558367197/AnsiballZ_group.py'
Nov 22 05:44:15 compute-0 sudo[248537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:15 compute-0 python3.9[248539]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 05:44:15 compute-0 groupadd[248540]: group added to /etc/group: name=nova, GID=42436
Nov 22 05:44:15 compute-0 groupadd[248540]: group added to /etc/gshadow: name=nova
Nov 22 05:44:15 compute-0 groupadd[248540]: new group: name=nova, GID=42436
Nov 22 05:44:15 compute-0 sudo[248537]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:16 compute-0 sudo[248695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijibgxqehgegdjhjirilkzzwsjwyrjio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790256.0855908-1224-6786162746882/AnsiballZ_user.py'
Nov 22 05:44:16 compute-0 sudo[248695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:16 compute-0 ceph-mon[75840]: pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:16 compute-0 python3.9[248697]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 05:44:16 compute-0 useradd[248699]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 22 05:44:16 compute-0 useradd[248699]: add 'nova' to group 'libvirt'
Nov 22 05:44:16 compute-0 useradd[248699]: add 'nova' to shadow group 'libvirt'
Nov 22 05:44:16 compute-0 sudo[248695]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:17 compute-0 sshd-session[248730]: Accepted publickey for zuul from 192.168.122.30 port 57742 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 05:44:17 compute-0 systemd-logind[798]: New session 50 of user zuul.
Nov 22 05:44:17 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 22 05:44:17 compute-0 sshd-session[248730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 05:44:18 compute-0 sshd-session[248733]: Received disconnect from 192.168.122.30 port 57742:11: disconnected by user
Nov 22 05:44:18 compute-0 sshd-session[248733]: Disconnected from user zuul 192.168.122.30 port 57742
Nov 22 05:44:18 compute-0 sshd-session[248730]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:44:18 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 22 05:44:18 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Nov 22 05:44:18 compute-0 systemd-logind[798]: Removed session 50.
Nov 22 05:44:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:18 compute-0 ceph-mon[75840]: pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:18 compute-0 python3.9[248883]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:19 compute-0 python3.9[249004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790258.2713304-1249-237980532808694/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:20 compute-0 python3.9[249154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:20 compute-0 python3.9[249230]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:20 compute-0 ceph-mon[75840]: pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:21 compute-0 python3.9[249380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:22 compute-0 python3.9[249501]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790260.9557655-1249-128247134346734/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:22 compute-0 ceph-mon[75840]: pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:22 compute-0 python3.9[249651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:23 compute-0 python3.9[249772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790262.3930857-1249-145795537720018/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:24 compute-0 python3.9[249922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:24 compute-0 ceph-mon[75840]: pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:25 compute-0 python3.9[250043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790263.877357-1249-142622528374272/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:25 compute-0 python3.9[250195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:26 compute-0 python3.9[250316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790265.358967-1249-142473801834477/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:26 compute-0 ceph-mon[75840]: pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:27 compute-0 sudo[250466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wevxmpfawhvpsppwijosrbztzwbxatzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790266.9502523-1332-232879354646323/AnsiballZ_file.py'
Nov 22 05:44:27 compute-0 sudo[250466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:27 compute-0 python3.9[250468]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:44:27 compute-0 sudo[250466]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:28 compute-0 sudo[250618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chyznbjxnvqcywmeuwhmgwgnmpuqglph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790267.7455533-1340-169740720485433/AnsiballZ_copy.py'
Nov 22 05:44:28 compute-0 sudo[250618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:28 compute-0 python3.9[250620]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:44:28 compute-0 sudo[250618]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:28 compute-0 ceph-mon[75840]: pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:29 compute-0 sudo[250770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvpqcaeqyfwubrqrboajktrlbvrpeygt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790268.5510392-1348-36348157735652/AnsiballZ_stat.py'
Nov 22 05:44:29 compute-0 sudo[250770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:29 compute-0 python3.9[250772]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:44:29 compute-0 sudo[250770]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:29 compute-0 sudo[250940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjuwmbafwpxsisqrgjuuzkaqbmhekhgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790269.489063-1356-197832007258183/AnsiballZ_stat.py'
Nov 22 05:44:29 compute-0 sudo[250940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:29 compute-0 sudo[250907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:29 compute-0 sudo[250907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:29 compute-0 sudo[250907]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:29 compute-0 sudo[250950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:44:29 compute-0 sudo[250950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:29 compute-0 sudo[250950]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 sudo[250975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:30 compute-0 sudo[250975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:30 compute-0 sudo[250975]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 python3.9[250948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:30 compute-0 sudo[250940]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 sudo[251000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:44:30 compute-0 sudo[251000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:30 compute-0 sudo[251162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spsinchrqptbwnvgvynsortjchowdvwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790269.489063-1356-197832007258183/AnsiballZ_copy.py'
Nov 22 05:44:30 compute-0 sudo[251162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:30 compute-0 sudo[251000]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:30 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6af12e0f-8163-47d4-9cf2-9167c6aeb644 does not exist
Nov 22 05:44:30 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 98a69249-7b26-4294-9adb-f7cdbb952225 does not exist
Nov 22 05:44:30 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev b4a7dfd6-46b8-4a0a-bdaa-d8081d1b4e31 does not exist
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:44:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:44:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:44:30 compute-0 python3.9[251166]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763790269.489063-1356-197832007258183/.source _original_basename=.d8m9nk6c follow=False checksum=e9d7a34410fea986092c054aa091a0303ea4e005 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 05:44:30 compute-0 sudo[251179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:30 compute-0 sudo[251179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:30 compute-0 sudo[251179]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 sudo[251162]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 sudo[251206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:44:30 compute-0 sudo[251206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:30 compute-0 sudo[251206]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:30 compute-0 sudo[251255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:30 compute-0 sudo[251255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:30 compute-0 sudo[251255]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:31 compute-0 sudo[251280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:44:31 compute-0 sudo[251280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.482000012 +0000 UTC m=+0.079430920 container create 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:44:31 compute-0 systemd[1]: Started libpod-conmon-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope.
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.441226244 +0000 UTC m=+0.038657232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.573712377 +0000 UTC m=+0.171143325 container init 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.581328221 +0000 UTC m=+0.178759139 container start 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.585671377 +0000 UTC m=+0.183102315 container attach 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:44:31 compute-0 peaceful_cannon[251489]: 167 167
Nov 22 05:44:31 compute-0 systemd[1]: libpod-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope: Deactivated successfully.
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.587885596 +0000 UTC m=+0.185316524 container died 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad4641e94bf59978a502ad239592bfcd74b8f21e65d4e6934ef71bebab60f57-merged.mount: Deactivated successfully.
Nov 22 05:44:31 compute-0 podman[251446]: 2025-11-22 05:44:31.65853204 +0000 UTC m=+0.255962938 container remove 9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_cannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:44:31 compute-0 systemd[1]: libpod-conmon-9843ea819ccdf72f3f362dce6f68a148bee4be7a8a22c1889becc990996a78ec.scope: Deactivated successfully.
Nov 22 05:44:31 compute-0 python3.9[251488]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:44:31 compute-0 podman[251517]: 2025-11-22 05:44:31.850570532 +0000 UTC m=+0.056427786 container create cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:44:31 compute-0 systemd[1]: Started libpod-conmon-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope.
Nov 22 05:44:31 compute-0 podman[251517]: 2025-11-22 05:44:31.818731582 +0000 UTC m=+0.024588856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:32 compute-0 podman[251517]: 2025-11-22 05:44:32.048262234 +0000 UTC m=+0.254119558 container init cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:44:32 compute-0 podman[251517]: 2025-11-22 05:44:32.062191126 +0000 UTC m=+0.268048410 container start cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:44:32 compute-0 podman[251517]: 2025-11-22 05:44:32.088899918 +0000 UTC m=+0.294757212 container attach cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:44:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:32 compute-0 python3.9[251687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:32 compute-0 ceph-mon[75840]: pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:33 compute-0 compassionate_proskuriakova[251557]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:44:33 compute-0 compassionate_proskuriakova[251557]: --> relative data size: 1.0
Nov 22 05:44:33 compute-0 compassionate_proskuriakova[251557]: --> All data devices are unavailable
Nov 22 05:44:33 compute-0 systemd[1]: libpod-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Deactivated successfully.
Nov 22 05:44:33 compute-0 podman[251517]: 2025-11-22 05:44:33.247779567 +0000 UTC m=+1.453636841 container died cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:44:33 compute-0 systemd[1]: libpod-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Consumed 1.141s CPU time.
Nov 22 05:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b46489483e7a90a8166994fbb8bdd5a88d7d88f9fa68f2a00b5116eed0d04f7-merged.mount: Deactivated successfully.
Nov 22 05:44:33 compute-0 python3.9[251828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790271.9904883-1382-203252441223459/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:33 compute-0 podman[251517]: 2025-11-22 05:44:33.379692195 +0000 UTC m=+1.585549449 container remove cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:44:33 compute-0 systemd[1]: libpod-conmon-cd6636828957a8da7347807b7dcd3e9385a90227eb990ac4f4bcf16ada314d50.scope: Deactivated successfully.
Nov 22 05:44:33 compute-0 sudo[251280]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:33 compute-0 sudo[251868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:33 compute-0 sudo[251868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:33 compute-0 sudo[251868]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:33 compute-0 sudo[251914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:44:33 compute-0 sudo[251914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:33 compute-0 sudo[251914]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:33 compute-0 sudo[251969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:33 compute-0 sudo[251969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:33 compute-0 sudo[251969]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:33 compute-0 sudo[251996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:44:33 compute-0 sudo[251996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.1526302 +0000 UTC m=+0.056203099 container create b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:44:34 compute-0 systemd[1]: Started libpod-conmon-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope.
Nov 22 05:44:34 compute-0 python3.9[252124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.120937476 +0000 UTC m=+0.024510355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.270524915 +0000 UTC m=+0.174097864 container init b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.279576336 +0000 UTC m=+0.183149235 container start b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.285642538 +0000 UTC m=+0.189215427 container attach b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:44:34 compute-0 affectionate_gagarin[252154]: 167 167
Nov 22 05:44:34 compute-0 systemd[1]: libpod-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope: Deactivated successfully.
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.289153512 +0000 UTC m=+0.192726411 container died b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c40f8b270026f3e212d25ce4149fe2d219b0c6d8a0fb508dd049a1e6e351627e-merged.mount: Deactivated successfully.
Nov 22 05:44:34 compute-0 podman[252137]: 2025-11-22 05:44:34.344076156 +0000 UTC m=+0.247649015 container remove b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:44:34 compute-0 systemd[1]: libpod-conmon-b859c7b4e7f8b41499345e158a2f8932f2b8f65848f6b58d77b4224b0729d61f.scope: Deactivated successfully.
Nov 22 05:44:34 compute-0 podman[252248]: 2025-11-22 05:44:34.522270239 +0000 UTC m=+0.051622128 container create 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:44:34 compute-0 systemd[1]: Started libpod-conmon-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope.
Nov 22 05:44:34 compute-0 podman[252248]: 2025-11-22 05:44:34.502295086 +0000 UTC m=+0.031647005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:34 compute-0 sshd-session[251995]: Invalid user  from 47.88.30.94 port 33944
Nov 22 05:44:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:34 compute-0 podman[252248]: 2025-11-22 05:44:34.685809961 +0000 UTC m=+0.215161920 container init 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 05:44:34 compute-0 podman[252248]: 2025-11-22 05:44:34.699103866 +0000 UTC m=+0.228455785 container start 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:44:34 compute-0 podman[252248]: 2025-11-22 05:44:34.706968545 +0000 UTC m=+0.236320464 container attach 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:44:34 compute-0 ceph-mon[75840]: pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:34 compute-0 python3.9[252317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763790273.5494294-1397-160924581257337/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 05:44:35 compute-0 fervent_volhard[252288]: {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     "0": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "devices": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "/dev/loop3"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             ],
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_name": "ceph_lv0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_size": "21470642176",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "name": "ceph_lv0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "tags": {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_name": "ceph",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.crush_device_class": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.encrypted": "0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_id": "0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.vdo": "0"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             },
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "vg_name": "ceph_vg0"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         }
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     ],
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     "1": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "devices": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "/dev/loop4"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             ],
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_name": "ceph_lv1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_size": "21470642176",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "name": "ceph_lv1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "tags": {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_name": "ceph",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.crush_device_class": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.encrypted": "0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_id": "1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.vdo": "0"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             },
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "vg_name": "ceph_vg1"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         }
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     ],
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     "2": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "devices": [
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "/dev/loop5"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             ],
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_name": "ceph_lv2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_size": "21470642176",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "name": "ceph_lv2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "tags": {
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.cluster_name": "ceph",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.crush_device_class": "",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.encrypted": "0",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osd_id": "2",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:                 "ceph.vdo": "0"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             },
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "type": "block",
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:             "vg_name": "ceph_vg2"
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:         }
Nov 22 05:44:35 compute-0 fervent_volhard[252288]:     ]
Nov 22 05:44:35 compute-0 fervent_volhard[252288]: }
Nov 22 05:44:35 compute-0 systemd[1]: libpod-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope: Deactivated successfully.
Nov 22 05:44:35 compute-0 podman[252248]: 2025-11-22 05:44:35.467304523 +0000 UTC m=+0.996656442 container died 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-37e45c09ffed04a0fd6a860c7e48706d1759b1eeecf42c6bbf60d3414684c2cd-merged.mount: Deactivated successfully.
Nov 22 05:44:35 compute-0 podman[252248]: 2025-11-22 05:44:35.551621582 +0000 UTC m=+1.080973471 container remove 2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_volhard, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:44:35 compute-0 systemd[1]: libpod-conmon-2ebee1fa668beefb10678881ef07684ccae700fc8f14eed7d1197c2a09f91716.scope: Deactivated successfully.
Nov 22 05:44:35 compute-0 sudo[251996]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:35 compute-0 podman[252423]: 2025-11-22 05:44:35.612010023 +0000 UTC m=+0.114483824 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:44:35 compute-0 sudo[252490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:35 compute-0 sudo[252490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:35 compute-0 sudo[252490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:35 compute-0 sudo[252532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyasolmslyguatvvptrvyabynqdbisdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790275.3101685-1414-274283168414498/AnsiballZ_container_config_data.py'
Nov 22 05:44:35 compute-0 sudo[252532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:35 compute-0 sudo[252538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:44:35 compute-0 sudo[252538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:35 compute-0 sudo[252538]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:35 compute-0 sudo[252564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:35 compute-0 sudo[252564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:35 compute-0 sudo[252564]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:35 compute-0 python3.9[252539]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 05:44:35 compute-0 sudo[252589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:44:35 compute-0 sudo[252589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:35 compute-0 sudo[252532]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.25680262 +0000 UTC m=+0.047048415 container create 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:44:36 compute-0 systemd[1]: Started libpod-conmon-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope.
Nov 22 05:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.238208994 +0000 UTC m=+0.028454819 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.35015055 +0000 UTC m=+0.140396435 container init 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.362992802 +0000 UTC m=+0.153238627 container start 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.367343638 +0000 UTC m=+0.157589503 container attach 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:44:36 compute-0 eager_mccarthy[252764]: 167 167
Nov 22 05:44:36 compute-0 systemd[1]: libpod-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope: Deactivated successfully.
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.371400437 +0000 UTC m=+0.161646262 container died 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3557e7e58c7a917e278f09fd1513bd122663d0803c0500ea1643f1bf56f62eb-merged.mount: Deactivated successfully.
Nov 22 05:44:36 compute-0 podman[252708]: 2025-11-22 05:44:36.42852847 +0000 UTC m=+0.218774295 container remove 1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:44:36 compute-0 systemd[1]: libpod-conmon-1ed5b7ec2e18da79a3da038d6d67e0d1748ceb6db84250120b0f3e5f1e9a5e6b.scope: Deactivated successfully.
Nov 22 05:44:36 compute-0 sudo[252837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjscszgurwlaelgjdtfhwifnvxbuudxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790276.173147-1423-198652368304166/AnsiballZ_container_config_hash.py'
Nov 22 05:44:36 compute-0 sudo[252837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:36 compute-0 podman[252845]: 2025-11-22 05:44:36.677438229 +0000 UTC m=+0.070626425 container create 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:44:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:36 compute-0 systemd[1]: Started libpod-conmon-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope.
Nov 22 05:44:36 compute-0 podman[252845]: 2025-11-22 05:44:36.641220443 +0000 UTC m=+0.034408659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:36 compute-0 ceph-mon[75840]: pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:44:36 compute-0 python3.9[252839]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 05:44:36 compute-0 podman[252845]: 2025-11-22 05:44:36.771098767 +0000 UTC m=+0.164287013 container init 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:44:36 compute-0 podman[252845]: 2025-11-22 05:44:36.780382695 +0000 UTC m=+0.173570931 container start 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:44:36 compute-0 podman[252845]: 2025-11-22 05:44:36.786075666 +0000 UTC m=+0.179263902 container attach 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 05:44:36 compute-0 sudo[252837]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.907 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:44:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.908 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:44:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:44:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:44:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:37 compute-0 sudo[253018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagjcrutjleitiecnhvpxzjipuouhitj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763790277.110761-1433-38965403406211/AnsiballZ_edpm_container_manage.py'
Nov 22 05:44:37 compute-0 sudo[253018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]: {
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_id": 1,
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "type": "bluestore"
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     },
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_id": 2,
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "type": "bluestore"
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     },
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_id": 0,
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:         "type": "bluestore"
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]:     }
Nov 22 05:44:37 compute-0 hopeful_herschel[252862]: }
Nov 22 05:44:37 compute-0 python3[253023]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 05:44:37 compute-0 systemd[1]: libpod-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope: Deactivated successfully.
Nov 22 05:44:37 compute-0 podman[252845]: 2025-11-22 05:44:37.728521872 +0000 UTC m=+1.121710098 container died 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d63adab2181c7c627f041599ea25d2406a5834ae70d759ed8bc4bd071d0007c-merged.mount: Deactivated successfully.
Nov 22 05:44:37 compute-0 podman[252845]: 2025-11-22 05:44:37.797752179 +0000 UTC m=+1.190940415 container remove 1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:44:37 compute-0 systemd[1]: libpod-conmon-1bcfdafeaa3a4ac422353913da7980fd0ce996ab7437f0c42c77c1034fd8c227.scope: Deactivated successfully.
Nov 22 05:44:37 compute-0 sudo[252589]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:44:37 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:44:37 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:37 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev af919446-7208-4a61-b04a-fa858562fecd does not exist
Nov 22 05:44:37 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 206ca234-e3cd-49d9-b8e7-9edba28a5591 does not exist
Nov 22 05:44:37 compute-0 sudo[253083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:44:37 compute-0 sudo[253083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:37 compute-0 sudo[253083]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:37 compute-0 sudo[253108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:44:38 compute-0 sudo[253108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:44:38 compute-0 sudo[253108]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:38 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:38 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:44:38 compute-0 ceph-mon[75840]: pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:40 compute-0 ceph-mon[75840]: pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:41 compute-0 sshd-session[251995]: Connection closed by invalid user  47.88.30.94 port 33944 [preauth]
Nov 22 05:44:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:43 compute-0 ceph-mon[75840]: pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:43 compute-0 podman[253160]: 2025-11-22 05:44:43.369748391 +0000 UTC m=+2.233404399 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:44:43
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.control', 'volumes', '.mgr']
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:44:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:44:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:46 compute-0 ceph-mon[75840]: pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:47 compute-0 podman[253201]: 2025-11-22 05:44:47.213028166 +0000 UTC m=+2.075794284 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:44:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:48 compute-0 ceph-mon[75840]: pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:48 compute-0 podman[253070]: 2025-11-22 05:44:48.189651314 +0000 UTC m=+10.381877708 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 05:44:48 compute-0 podman[253246]: 2025-11-22 05:44:48.567909323 +0000 UTC m=+0.037560453 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 05:44:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:49 compute-0 ceph-mon[75840]: pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:49 compute-0 podman[253246]: 2025-11-22 05:44:49.696648697 +0000 UTC m=+1.166299817 container create dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 05:44:49 compute-0 python3[253023]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 22 05:44:49 compute-0 sudo[253018]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:44:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:44:53 compute-0 sudo[253438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sovzmkhxlzxodfoypvvopwutebjbrnjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790292.5838647-1441-42825047508738/AnsiballZ_stat.py'
Nov 22 05:44:53 compute-0 sudo[253438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:53 compute-0 python3.9[253440]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:44:53 compute-0 sudo[253438]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:53 compute-0 ceph-mon[75840]: pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:54 compute-0 sudo[253592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsznqlgpqzioriurfvuntbmjjaudipvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790293.8141117-1453-191396455734887/AnsiballZ_container_config_data.py'
Nov 22 05:44:54 compute-0 sudo[253592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:54 compute-0 python3.9[253594]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 22 05:44:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:54 compute-0 sudo[253592]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:55 compute-0 sudo[253744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujaszeohhtqfrzbqjyzipuqcdetthvks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790294.7054029-1462-195345773949701/AnsiballZ_container_config_hash.py'
Nov 22 05:44:55 compute-0 sudo[253744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:55 compute-0 python3.9[253746]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 05:44:55 compute-0 sudo[253744]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:55 compute-0 ceph-mon[75840]: pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:56 compute-0 sudo[253896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putomuvhfryeepzudofghdkggvqfvqyy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763790295.6928344-1472-244102588239917/AnsiballZ_edpm_container_manage.py'
Nov 22 05:44:56 compute-0 sudo[253896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:56 compute-0 python3[253898]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 05:44:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:56 compute-0 podman[253934]: 2025-11-22 05:44:56.617662479 +0000 UTC m=+0.052797060 container create 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 22 05:44:56 compute-0 podman[253934]: 2025-11-22 05:44:56.591067459 +0000 UTC m=+0.026202070 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 05:44:56 compute-0 python3[253898]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 22 05:44:56 compute-0 sudo[253896]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:44:57 compute-0 sudo[254122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kekmnvwqvaqscfrhyvfayhlbjhzidlvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790297.0355744-1480-163865361792715/AnsiballZ_stat.py'
Nov 22 05:44:57 compute-0 sudo[254122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:57 compute-0 ceph-mon[75840]: pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:57 compute-0 python3.9[254124]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:44:57 compute-0 sudo[254122]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:58 compute-0 sudo[254276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrzroqfafbgxiejkhyfhybhtqqwkrqex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790297.9771333-1489-114450033173840/AnsiballZ_file.py'
Nov 22 05:44:58 compute-0 sudo[254276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:58 compute-0 python3.9[254278]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:44:58 compute-0 sudo[254276]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:59 compute-0 sudo[254427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbluzoispiwjvtdjjudhxxrokzvcqyxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790298.6206381-1489-28034645692811/AnsiballZ_copy.py'
Nov 22 05:44:59 compute-0 sudo[254427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:59 compute-0 python3.9[254429]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763790298.6206381-1489-28034645692811/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 05:44:59 compute-0 sudo[254427]: pam_unix(sudo:session): session closed for user root
Nov 22 05:44:59 compute-0 ceph-mon[75840]: pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:44:59 compute-0 sudo[254503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbijrrbnyunlcaouulnxtgkzyxlyejp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790298.6206381-1489-28034645692811/AnsiballZ_systemd.py'
Nov 22 05:44:59 compute-0 sudo[254503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:44:59 compute-0 python3.9[254505]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 05:44:59 compute-0 systemd[1]: Reloading.
Nov 22 05:45:00 compute-0 systemd-sysv-generator[254534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:45:00 compute-0 systemd-rc-local-generator[254530]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:45:00 compute-0 sudo[254503]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:00 compute-0 sudo[254614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhzmdfzgaedqqdwvdyarqbiocjpgnig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790298.6206381-1489-28034645692811/AnsiballZ_systemd.py'
Nov 22 05:45:00 compute-0 sudo[254614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:45:00 compute-0 python3.9[254616]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 05:45:00 compute-0 systemd[1]: Reloading.
Nov 22 05:45:01 compute-0 systemd-sysv-generator[254648]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 05:45:01 compute-0 systemd-rc-local-generator[254645]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 05:45:01 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 05:45:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:01 compute-0 podman[254655]: 2025-11-22 05:45:01.512071859 +0000 UTC m=+0.119429147 container init 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 22 05:45:01 compute-0 ceph-mon[75840]: pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:01 compute-0 podman[254655]: 2025-11-22 05:45:01.524402948 +0000 UTC m=+0.131760216 container start 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 05:45:01 compute-0 podman[254655]: nova_compute
Nov 22 05:45:01 compute-0 nova_compute[254670]: + sudo -E kolla_set_configs
Nov 22 05:45:01 compute-0 systemd[1]: Started nova_compute container.
Nov 22 05:45:01 compute-0 sudo[254614]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Validating config file
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying service configuration files
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Deleting /etc/ceph
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Creating directory /etc/ceph
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Writing out command to execute
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:01 compute-0 nova_compute[254670]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 05:45:01 compute-0 nova_compute[254670]: ++ cat /run_command
Nov 22 05:45:01 compute-0 nova_compute[254670]: + CMD=nova-compute
Nov 22 05:45:01 compute-0 nova_compute[254670]: + ARGS=
Nov 22 05:45:01 compute-0 nova_compute[254670]: + sudo kolla_copy_cacerts
Nov 22 05:45:01 compute-0 nova_compute[254670]: + [[ ! -n '' ]]
Nov 22 05:45:01 compute-0 nova_compute[254670]: + . kolla_extend_start
Nov 22 05:45:01 compute-0 nova_compute[254670]: Running command: 'nova-compute'
Nov 22 05:45:01 compute-0 nova_compute[254670]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 05:45:01 compute-0 nova_compute[254670]: + umask 0022
Nov 22 05:45:01 compute-0 nova_compute[254670]: + exec nova-compute
Nov 22 05:45:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:02 compute-0 python3.9[254832]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:45:03 compute-0 ceph-mon[75840]: pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.599 254674 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 22 05:45:03 compute-0 python3.9[254982]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.725 254674 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.753 254674 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:45:03 compute-0 nova_compute[254670]: 2025-11-22 05:45:03.753 254674 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 22 05:45:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.426 254674 INFO nova.virt.driver [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.551 254674 INFO nova.compute.provider_config [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.566 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.567 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.568 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.569 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.570 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.571 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.572 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.573 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.574 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.575 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.576 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.577 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.578 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.579 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.580 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.581 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.582 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.583 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.584 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.585 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.586 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.587 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.588 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 python3.9[255136]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.589 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.590 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.591 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.592 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.593 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.594 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.595 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.596 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.597 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.598 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.599 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.600 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.601 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.602 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.603 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.604 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.605 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.606 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.607 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.608 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.609 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.610 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.611 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.612 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.613 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.614 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.615 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.616 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.617 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.618 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.619 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.620 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.621 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.622 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.623 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.624 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.625 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.626 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.627 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.628 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.629 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.630 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.631 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.632 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.633 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.634 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.635 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.636 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.637 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.638 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.639 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 WARNING oslo_config.cfg [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 05:45:04 compute-0 nova_compute[254670]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 05:45:04 compute-0 nova_compute[254670]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 05:45:04 compute-0 nova_compute[254670]: and ``live_migration_inbound_addr`` respectively.
Nov 22 05:45:04 compute-0 nova_compute[254670]: ).  Its value may be silently ignored in the future.
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.640 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.641 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.642 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_secret_uuid        = 13fdadc6-d566-5465-9ac8-a148ef130da1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.643 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.644 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.645 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.646 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.647 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.648 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.649 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.650 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.651 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.652 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.653 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.654 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.655 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.656 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.657 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.658 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.659 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.660 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.661 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.662 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.663 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.664 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.665 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.666 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.667 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.668 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.669 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.670 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.671 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.672 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.673 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.674 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.675 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.676 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.677 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.678 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.679 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.680 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.681 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.682 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.683 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.684 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.685 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.686 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.687 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.688 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.689 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.690 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.691 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.692 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.693 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.694 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.695 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.696 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.697 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.698 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.699 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.700 254674 DEBUG oslo_service.service [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.701 254674 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.727 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.728 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 05:45:04 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 05:45:04 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.802 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd968fecd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.805 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd968fecd00> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.806 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Connection event '1' reason 'None'
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.827 254674 WARNING nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 05:45:04 compute-0 nova_compute[254670]: 2025-11-22 05:45:04.827 254674 DEBUG nova.virt.libvirt.volume.mount [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 05:45:05 compute-0 sudo[255340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvgtswzpokkmxandoofxjdpoxagzefmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790304.897848-1549-197109609999475/AnsiballZ_podman_container.py'
Nov 22 05:45:05 compute-0 sudo[255340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:45:05 compute-0 ceph-mon[75840]: pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:05 compute-0 python3.9[255348]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 05:45:05 compute-0 sudo[255340]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:05 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:45:05 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.887 254674 INFO nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 05:45:05 compute-0 nova_compute[254670]: 
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <host>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <uuid>66851c39-840f-46c8-adfc-77dc6a7d91a4</uuid>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <cpu>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <arch>x86_64</arch>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model>EPYC-Rome-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <vendor>AMD</vendor>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <microcode version='16777317'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <signature family='23' model='49' stepping='0'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='x2apic'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='tsc-deadline'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='osxsave'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='hypervisor'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='tsc_adjust'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='spec-ctrl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='stibp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='arch-capabilities'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='cmp_legacy'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='topoext'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='virt-ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='lbrv'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='tsc-scale'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='vmcb-clean'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='pause-filter'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='pfthreshold'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='svme-addr-chk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='rdctl-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='mds-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature name='pschange-mc-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <pages unit='KiB' size='4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <pages unit='KiB' size='2048'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <pages unit='KiB' size='1048576'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </cpu>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <power_management>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <suspend_mem/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </power_management>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <iommu support='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <migration_features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <live/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <uri_transports>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <uri_transport>tcp</uri_transport>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <uri_transport>rdma</uri_transport>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </uri_transports>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </migration_features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <topology>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <cells num='1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <cell id='0'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <memory unit='KiB'>7864320</memory>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <distances>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <sibling id='0' value='10'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           </distances>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           <cpus num='8'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:           </cpus>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         </cell>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </cells>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </topology>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <cache>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </cache>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <secmodel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model>selinux</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <doi>0</doi>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </secmodel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <secmodel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model>dac</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <doi>0</doi>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </secmodel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </host>
Nov 22 05:45:05 compute-0 nova_compute[254670]: 
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <guest>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <os_type>hvm</os_type>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <arch name='i686'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <wordsize>32</wordsize>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <domain type='qemu'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <domain type='kvm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </arch>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <pae/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <nonpae/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <acpi default='on' toggle='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <apic default='on' toggle='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <cpuselection/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <deviceboot/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <disksnapshot default='on' toggle='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <externalSnapshot/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </guest>
Nov 22 05:45:05 compute-0 nova_compute[254670]: 
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <guest>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <os_type>hvm</os_type>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <arch name='x86_64'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <wordsize>64</wordsize>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <domain type='qemu'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <domain type='kvm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </arch>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <acpi default='on' toggle='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <apic default='on' toggle='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <cpuselection/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <deviceboot/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <disksnapshot default='on' toggle='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <externalSnapshot/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </guest>
Nov 22 05:45:05 compute-0 nova_compute[254670]: 
Nov 22 05:45:05 compute-0 nova_compute[254670]: </capabilities>
Nov 22 05:45:05 compute-0 nova_compute[254670]: 
Nov 22 05:45:05 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.895 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 05:45:05 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.921 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 05:45:05 compute-0 nova_compute[254670]: <domainCapabilities>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <domain>kvm</domain>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <arch>i686</arch>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <vcpu max='240'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <iothreads supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <os supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <enum name='firmware'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <loader supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>rom</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pflash</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='readonly'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>yes</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='secure'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </loader>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </os>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <cpu>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='maximumMigratable'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <vendor>AMD</vendor>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='succor'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='custom' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-128'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-256'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-512'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='IvyBridge'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='KnightsMill'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SierraForest'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Snowridge'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='athlon'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='athlon-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='core2duo'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='core2duo-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='coreduo'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='coreduo-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='n270'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='n270-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='phenom'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='phenom-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </cpu>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <memoryBacking supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <enum name='sourceType'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <value>file</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <value>anonymous</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <value>memfd</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </memoryBacking>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <devices>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <disk supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='diskDevice'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>disk</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>cdrom</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>floppy</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>lun</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>ide</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>fdc</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>sata</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </disk>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <graphics supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vnc</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>egl-headless</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </graphics>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <video supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='modelType'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vga</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>cirrus</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>none</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>bochs</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>ramfb</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </video>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <hostdev supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='mode'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>subsystem</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='startupPolicy'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>mandatory</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>requisite</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>optional</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='subsysType'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pci</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='capsType'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='pciBackend'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </hostdev>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <rng supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>random</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>egd</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </rng>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <filesystem supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='driverType'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>path</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>handle</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>virtiofs</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </filesystem>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <tpm supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>tpm-tis</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>tpm-crb</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>emulator</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>external</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='backendVersion'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>2.0</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </tpm>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <redirdev supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </redirdev>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <channel supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </channel>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <crypto supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='model'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>qemu</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </crypto>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <interface supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='backendType'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>passt</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </interface>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <panic supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>isa</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>hyperv</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </panic>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <console supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>null</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vc</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>dev</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>file</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pipe</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>stdio</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>udp</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>tcp</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>qemu-vdagent</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </console>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </devices>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <features>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <gic supported='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <genid supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <backup supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <async-teardown supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <ps2 supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <sev supported='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <sgx supported='no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <hyperv supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='features'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>relaxed</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vapic</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>spinlocks</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vpindex</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>runtime</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>synic</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>stimer</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>reset</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>vendor_id</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>frequencies</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>reenlightenment</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>tlbflush</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>ipi</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>avic</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>emsr_bitmap</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>xmm_input</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <defaults>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </defaults>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </hyperv>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <launchSecurity supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='sectype'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>tdx</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </launchSecurity>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </features>
Nov 22 05:45:05 compute-0 nova_compute[254670]: </domainCapabilities>
Nov 22 05:45:05 compute-0 nova_compute[254670]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:05 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.928 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 05:45:05 compute-0 nova_compute[254670]: <domainCapabilities>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <domain>kvm</domain>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <arch>i686</arch>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <vcpu max='4096'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <iothreads supported='yes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <os supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <enum name='firmware'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <loader supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>rom</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>pflash</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='readonly'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>yes</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='secure'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </loader>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   </os>
Nov 22 05:45:05 compute-0 nova_compute[254670]:   <cpu>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <enum name='maximumMigratable'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <vendor>AMD</vendor>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='succor'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:05 compute-0 nova_compute[254670]:     <mode name='custom' supported='yes'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Denverton-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='EPYC-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-128'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-256'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx10-512'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Haswell-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:05 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:05 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 podman[255373]: 2025-11-22 05:45:06.026383531 +0000 UTC m=+0.113874537 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </cpu>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <memoryBacking supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <enum name='sourceType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>anonymous</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>memfd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </memoryBacking>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <disk supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='diskDevice'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>disk</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cdrom</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>floppy</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>lun</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>fdc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>sata</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </disk>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <graphics supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vnc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egl-headless</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </graphics>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <video supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='modelType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vga</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cirrus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>none</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>bochs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ramfb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </video>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hostdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='mode'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>subsystem</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='startupPolicy'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>mandatory</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>requisite</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>optional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='subsysType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pci</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='capsType'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='pciBackend'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hostdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <rng supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>random</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </rng>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <filesystem supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='driverType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>path</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>handle</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtiofs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </filesystem>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <tpm supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-tis</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-crb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emulator</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>external</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendVersion'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>2.0</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </tpm>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <redirdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </redirdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <channel supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </channel>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <crypto supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </crypto>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <interface supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>passt</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </interface>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <panic supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>isa</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>hyperv</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </panic>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <console supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>null</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dev</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pipe</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stdio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>udp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tcp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu-vdagent</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </console>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <features>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <gic supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <genid supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backup supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <async-teardown supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <ps2 supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sev supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sgx supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hyperv supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='features'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>relaxed</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vapic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>spinlocks</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vpindex</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>runtime</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>synic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stimer</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reset</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vendor_id</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>frequencies</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reenlightenment</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tlbflush</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ipi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>avic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emsr_bitmap</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>xmm_input</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hyperv>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <launchSecurity supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='sectype'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tdx</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </launchSecurity>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </features>
Nov 22 05:45:06 compute-0 nova_compute[254670]: </domainCapabilities>
Nov 22 05:45:06 compute-0 nova_compute[254670]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.955 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:05.960 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 05:45:06 compute-0 nova_compute[254670]: <domainCapabilities>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <domain>kvm</domain>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <arch>x86_64</arch>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <vcpu max='240'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <iothreads supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <os supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <enum name='firmware'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <loader supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>rom</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pflash</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='readonly'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>yes</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='secure'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </loader>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </os>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <cpu>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='maximumMigratable'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <vendor>AMD</vendor>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='succor'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='custom' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-128'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-256'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-512'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </cpu>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <memoryBacking supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <enum name='sourceType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>anonymous</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>memfd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </memoryBacking>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <disk supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='diskDevice'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>disk</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cdrom</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>floppy</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>lun</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ide</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>fdc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>sata</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </disk>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <graphics supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vnc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egl-headless</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </graphics>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <video supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='modelType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vga</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cirrus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>none</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>bochs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ramfb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </video>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hostdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='mode'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>subsystem</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='startupPolicy'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>mandatory</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>requisite</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>optional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='subsysType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pci</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='capsType'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='pciBackend'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hostdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <rng supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>random</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </rng>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <filesystem supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='driverType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>path</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>handle</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtiofs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </filesystem>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <tpm supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-tis</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-crb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emulator</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>external</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendVersion'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>2.0</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </tpm>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <redirdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </redirdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <channel supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </channel>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <crypto supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </crypto>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <interface supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>passt</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </interface>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <panic supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>isa</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>hyperv</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </panic>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <console supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>null</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dev</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pipe</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stdio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>udp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tcp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu-vdagent</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </console>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <features>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <gic supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <genid supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backup supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <async-teardown supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <ps2 supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sev supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sgx supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hyperv supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='features'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>relaxed</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vapic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>spinlocks</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vpindex</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>runtime</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>synic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stimer</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reset</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vendor_id</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>frequencies</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reenlightenment</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tlbflush</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ipi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>avic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emsr_bitmap</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>xmm_input</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hyperv>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <launchSecurity supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='sectype'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tdx</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </launchSecurity>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </features>
Nov 22 05:45:06 compute-0 nova_compute[254670]: </domainCapabilities>
Nov 22 05:45:06 compute-0 nova_compute[254670]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.045 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 05:45:06 compute-0 nova_compute[254670]: <domainCapabilities>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <domain>kvm</domain>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <arch>x86_64</arch>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <vcpu max='4096'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <iothreads supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <os supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <enum name='firmware'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>efi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <loader supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>rom</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pflash</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='readonly'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>yes</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='secure'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>yes</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>no</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </loader>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </os>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <cpu>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='maximumMigratable'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>on</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>off</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <vendor>AMD</vendor>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='succor'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <mode name='custom' supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Denverton-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='auto-ibrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amd-psfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='stibp-always-on'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='EPYC-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-128'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-256'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx10-512'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='prefetchiti'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Haswell-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512er'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512pf'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fma4'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tbm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xop'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='amx-tile'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-bf16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-fp16'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bitalg'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrc'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fzrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='la57'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='taa-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xfd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ifma'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cmpccxadd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fbsdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='fsrs'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ibrs-all'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mcdt-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pbrsb-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='psdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='serialize'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vaes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='hle'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='rtm'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512bw'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512cd'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512dq'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512f'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='avx512vl'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='invpcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pcid'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='pku'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='mpx'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='core-capability'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='split-lock-detect'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='cldemote'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='erms'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='gfni'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdir64b'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='movdiri'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='xsaves'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='athlon-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='core2duo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='coreduo-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='n270-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='ss'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <blockers model='phenom-v1'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnow'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <feature name='3dnowext'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </blockers>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </mode>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </cpu>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <memoryBacking supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <enum name='sourceType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>anonymous</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <value>memfd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </memoryBacking>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <disk supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='diskDevice'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>disk</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cdrom</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>floppy</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>lun</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>fdc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>sata</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </disk>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <graphics supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vnc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egl-headless</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </graphics>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <video supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='modelType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vga</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>cirrus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>none</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>bochs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ramfb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </video>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hostdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='mode'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>subsystem</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='startupPolicy'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>mandatory</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>requisite</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>optional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='subsysType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pci</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>scsi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='capsType'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='pciBackend'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hostdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <rng supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtio-non-transitional</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>random</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>egd</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </rng>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <filesystem supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='driverType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>path</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>handle</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>virtiofs</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </filesystem>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <tpm supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-tis</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tpm-crb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emulator</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>external</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendVersion'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>2.0</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </tpm>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <redirdev supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='bus'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>usb</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </redirdev>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <channel supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </channel>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <crypto supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendModel'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>builtin</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </crypto>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <interface supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='backendType'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>default</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>passt</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </interface>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <panic supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='model'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>isa</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>hyperv</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </panic>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <console supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='type'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>null</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vc</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pty</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dev</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>file</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>pipe</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stdio</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>udp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tcp</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>unix</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>qemu-vdagent</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>dbus</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </console>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </devices>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   <features>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <gic supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <genid supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <backup supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <async-teardown supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <ps2 supported='yes'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sev supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <sgx supported='no'/>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <hyperv supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='features'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>relaxed</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vapic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>spinlocks</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vpindex</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>runtime</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>synic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>stimer</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reset</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>vendor_id</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>frequencies</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>reenlightenment</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tlbflush</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>ipi</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>avic</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>emsr_bitmap</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>xmm_input</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </defaults>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </hyperv>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     <launchSecurity supported='yes'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       <enum name='sectype'>
Nov 22 05:45:06 compute-0 nova_compute[254670]:         <value>tdx</value>
Nov 22 05:45:06 compute-0 nova_compute[254670]:       </enum>
Nov 22 05:45:06 compute-0 nova_compute[254670]:     </launchSecurity>
Nov 22 05:45:06 compute-0 nova_compute[254670]:   </features>
Nov 22 05:45:06 compute-0 nova_compute[254670]: </domainCapabilities>
Nov 22 05:45:06 compute-0 nova_compute[254670]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.159 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 DEBUG nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.160 254674 INFO nova.virt.libvirt.host [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Secure Boot support detected
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.163 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.164 254674 INFO nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.181 254674 DEBUG nova.virt.libvirt.driver [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.224 254674 INFO nova.virt.node [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.246 254674 WARNING nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Compute nodes ['7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.296 254674 INFO nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.339 254674 WARNING nova.compute.manager [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.340 254674 DEBUG oslo_concurrency.lockutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.341 254674 DEBUG nova.compute.resource_tracker [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.341 254674 DEBUG oslo_concurrency.processutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:45:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:06 compute-0 sudo[255572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clogydoizgrdmtodneoagrnpxmdvlynk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790306.1299376-1557-173341688602807/AnsiballZ_systemd.py'
Nov 22 05:45:06 compute-0 sudo[255572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:45:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:45:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128570508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.807 254674 DEBUG oslo_concurrency.processutils [None req-e4a7a7c1-3a9a-478f-b25c-6f36b4328e91 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:45:06 compute-0 python3.9[255574]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 05:45:06 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 05:45:06 compute-0 systemd[1]: Stopping nova_compute container...
Nov 22 05:45:06 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.959 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.960 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 05:45:06 compute-0 nova_compute[254670]: 2025-11-22 05:45:06.960 254674 DEBUG oslo_concurrency.lockutils [None req-ce937e1a-77dc-4ba6-93e9-fa7249ed387f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 05:45:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:07 compute-0 virtqemud[255182]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 05:45:07 compute-0 virtqemud[255182]: hostname: compute-0
Nov 22 05:45:07 compute-0 virtqemud[255182]: End of file while reading data: Input/output error
Nov 22 05:45:07 compute-0 systemd[1]: libpod-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d.scope: Deactivated successfully.
Nov 22 05:45:07 compute-0 systemd[1]: libpod-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d.scope: Consumed 3.518s CPU time.
Nov 22 05:45:07 compute-0 podman[255591]: 2025-11-22 05:45:07.41208499 +0000 UTC m=+0.500407098 container died 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 22 05:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d-userdata-shm.mount: Deactivated successfully.
Nov 22 05:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203-merged.mount: Deactivated successfully.
Nov 22 05:45:08 compute-0 ceph-mon[75840]: pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/128570508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:08 compute-0 podman[255591]: 2025-11-22 05:45:08.584300635 +0000 UTC m=+1.672622703 container cleanup 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:45:08 compute-0 podman[255591]: nova_compute
Nov 22 05:45:08 compute-0 podman[255631]: nova_compute
Nov 22 05:45:08 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 05:45:08 compute-0 systemd[1]: Stopped nova_compute container.
Nov 22 05:45:08 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 05:45:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cbfb8ca269b8424d1494a29eac0161941af7167af76420e9229246295016203/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:08 compute-0 podman[255644]: 2025-11-22 05:45:08.870445946 +0000 UTC m=+0.150900456 container init 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm)
Nov 22 05:45:08 compute-0 podman[255644]: 2025-11-22 05:45:08.880959637 +0000 UTC m=+0.161414117 container start 348046734b16960f371f783aab4fa0e34b4a40f80d4364fe3bb3a5c98d6d4c4d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, config_id=edpm, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:45:08 compute-0 podman[255644]: nova_compute
Nov 22 05:45:08 compute-0 nova_compute[255660]: + sudo -E kolla_set_configs
Nov 22 05:45:08 compute-0 systemd[1]: Started nova_compute container.
Nov 22 05:45:08 compute-0 sudo[255572]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Validating config file
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying service configuration files
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /etc/ceph
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Creating directory /etc/ceph
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Writing out command to execute
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:08 compute-0 nova_compute[255660]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 05:45:08 compute-0 nova_compute[255660]: ++ cat /run_command
Nov 22 05:45:08 compute-0 nova_compute[255660]: + CMD=nova-compute
Nov 22 05:45:08 compute-0 nova_compute[255660]: + ARGS=
Nov 22 05:45:08 compute-0 nova_compute[255660]: + sudo kolla_copy_cacerts
Nov 22 05:45:09 compute-0 nova_compute[255660]: + [[ ! -n '' ]]
Nov 22 05:45:09 compute-0 nova_compute[255660]: + . kolla_extend_start
Nov 22 05:45:09 compute-0 nova_compute[255660]: Running command: 'nova-compute'
Nov 22 05:45:09 compute-0 nova_compute[255660]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 05:45:09 compute-0 nova_compute[255660]: + umask 0022
Nov 22 05:45:09 compute-0 nova_compute[255660]: + exec nova-compute
Nov 22 05:45:09 compute-0 ceph-mon[75840]: pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:09 compute-0 sudo[255821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfvdsoogvrbsixecknnsmyouttextaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763790309.2812312-1566-255521453261333/AnsiballZ_podman_container.py'
Nov 22 05:45:09 compute-0 sudo[255821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 05:45:09 compute-0 python3.9[255823]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 05:45:10 compute-0 systemd[1]: Started libpod-conmon-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope.
Nov 22 05:45:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:10 compute-0 podman[255849]: 2025-11-22 05:45:10.198031864 +0000 UTC m=+0.171447453 container init dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible)
Nov 22 05:45:10 compute-0 podman[255849]: 2025-11-22 05:45:10.210084676 +0000 UTC m=+0.183500225 container start dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3)
Nov 22 05:45:10 compute-0 python3.9[255823]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 05:45:10 compute-0 nova_compute_init[255870]: INFO:nova_statedir:Nova statedir ownership complete
Nov 22 05:45:10 compute-0 systemd[1]: libpod-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope: Deactivated successfully.
Nov 22 05:45:10 compute-0 podman[255871]: 2025-11-22 05:45:10.300830837 +0000 UTC m=+0.046696267 container died dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a4560cb10870c5132ded9057c952a70378f7f6d50c073e17fee110815ecf360-merged.mount: Deactivated successfully.
Nov 22 05:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556-userdata-shm.mount: Deactivated successfully.
Nov 22 05:45:10 compute-0 podman[255881]: 2025-11-22 05:45:10.363846157 +0000 UTC m=+0.054575966 container cleanup dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true)
Nov 22 05:45:10 compute-0 systemd[1]: libpod-conmon-dc599c725116ba847223ccb324bed5bcc999b5a521826699fa2887f1c1f61556.scope: Deactivated successfully.
Nov 22 05:45:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:10 compute-0 sudo[255821]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:10 compute-0 sshd-session[224701]: Connection closed by 192.168.122.30 port 55190
Nov 22 05:45:10 compute-0 sshd-session[224698]: pam_unix(sshd:session): session closed for user zuul
Nov 22 05:45:10 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 22 05:45:10 compute-0 systemd[1]: session-49.scope: Consumed 2min 44.941s CPU time.
Nov 22 05:45:10 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Nov 22 05:45:10 compute-0 systemd-logind[798]: Removed session 49.
Nov 22 05:45:10 compute-0 nova_compute[255660]: 2025-11-22 05:45:10.964 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:10 compute-0 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:10 compute-0 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 05:45:10 compute-0 nova_compute[255660]: 2025-11-22 05:45:10.965 255664 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.093 255664 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.119 255664 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.120 255664 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 22 05:45:11 compute-0 ceph-mon[75840]: pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.680 255664 INFO nova.virt.driver [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.792 255664 INFO nova.compute.provider_config [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.812 255664 DEBUG oslo_concurrency.lockutils [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.813 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.814 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.815 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.816 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.817 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.818 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.819 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.820 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.821 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.822 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.823 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.824 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.825 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.826 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.827 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.828 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.829 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.830 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.831 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.832 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.833 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.834 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.835 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.836 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.837 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.838 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.839 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.840 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.841 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.842 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.843 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.844 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.845 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.846 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.847 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.848 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.849 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.850 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.851 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.852 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.853 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.854 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.855 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.856 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.857 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.858 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.859 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.860 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.861 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.862 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.863 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.864 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.865 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.866 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.867 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.868 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.869 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.870 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.871 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.872 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.873 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.874 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.875 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.876 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.877 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.878 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.879 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.880 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.881 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.882 255664 WARNING oslo_config.cfg [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 05:45:11 compute-0 nova_compute[255660]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 05:45:11 compute-0 nova_compute[255660]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 22 05:45:11 compute-0 nova_compute[255660]: and ``live_migration_inbound_addr`` respectively.
Nov 22 05:45:11 compute-0 nova_compute[255660]: ).  Its value may be silently ignored in the future.
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
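[editor's note: the WARNING above reports that `live_migration_uri` is deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. Since the logged value is `qemu+tls://%s/system`, a roughly equivalent replacement could be sketched as the following hypothetical nova.conf fragment — the address value is a placeholder, not taken from this log:]

```ini
[libvirt]
# Replaces the deprecated: live_migration_uri = qemu+tls://%s/system
# "tls" makes Nova build qemu+tls:// migration URIs
live_migration_scheme = tls
# Address (or hostname) other computes should use to reach this host
# for migration traffic; placeholder value for illustration only.
live_migration_inbound_addr = compute-0.internalapi.example
```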
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.883 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.884 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_secret_uuid        = 13fdadc6-d566-5465-9ac8-a148ef130da1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.885 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.886 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.887 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.888 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.889 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.890 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.891 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.892 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.893 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.894 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.895 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.896 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.897 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.898 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.899 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.900 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.901 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.902 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.903 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.904 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.905 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.906 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.907 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.908 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.909 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.910 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.911 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.912 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.913 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.914 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.915 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.916 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.917 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.918 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.919 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.920 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.921 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.922 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.923 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.924 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.925 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.926 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.927 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.928 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.929 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.930 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.931 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.932 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.933 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.934 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.935 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.936 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.937 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.938 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.939 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.940 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.941 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.942 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.943 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.944 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.945 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.945 255664 DEBUG oslo_service.service [None req-1d0b2be3-ba1e-42e6-b67a-ed77c08fa67d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.946 255664 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.965 255664 INFO nova.virt.node [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.965 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.966 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.979 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f21ba2ea550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.982 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f21ba2ea550> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.983 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Connection event '1' reason 'None'
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.988 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 05:45:11 compute-0 nova_compute[255660]: 
Nov 22 05:45:11 compute-0 nova_compute[255660]:   <host>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <uuid>66851c39-840f-46c8-adfc-77dc6a7d91a4</uuid>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <cpu>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <arch>x86_64</arch>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <model>EPYC-Rome-v4</model>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <vendor>AMD</vendor>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <microcode version='16777317'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <signature family='23' model='49' stepping='0'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='x2apic'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='tsc-deadline'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='osxsave'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='hypervisor'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='tsc_adjust'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='spec-ctrl'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='stibp'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='arch-capabilities'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='ssbd'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='cmp_legacy'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='topoext'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='virt-ssbd'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='lbrv'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='tsc-scale'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='vmcb-clean'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='pause-filter'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='pfthreshold'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='svme-addr-chk'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='rdctl-no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='mds-no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <feature name='pschange-mc-no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <pages unit='KiB' size='4'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <pages unit='KiB' size='2048'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <pages unit='KiB' size='1048576'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </cpu>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <power_management>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <suspend_mem/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </power_management>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <iommu support='no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <migration_features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <live/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <uri_transports>
Nov 22 05:45:11 compute-0 nova_compute[255660]:         <uri_transport>tcp</uri_transport>
Nov 22 05:45:11 compute-0 nova_compute[255660]:         <uri_transport>rdma</uri_transport>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       </uri_transports>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </migration_features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <topology>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <cells num='1'>
Nov 22 05:45:11 compute-0 nova_compute[255660]:         <cell id='0'>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <memory unit='KiB'>7864320</memory>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <distances>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <sibling id='0' value='10'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           </distances>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           <cpus num='8'>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:           </cpus>
Nov 22 05:45:11 compute-0 nova_compute[255660]:         </cell>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       </cells>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </topology>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <cache>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </cache>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <secmodel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <model>selinux</model>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <doi>0</doi>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </secmodel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <secmodel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <model>dac</model>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <doi>0</doi>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </secmodel>
Nov 22 05:45:11 compute-0 nova_compute[255660]:   </host>
Nov 22 05:45:11 compute-0 nova_compute[255660]: 
Nov 22 05:45:11 compute-0 nova_compute[255660]:   <guest>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <os_type>hvm</os_type>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <arch name='i686'>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <wordsize>32</wordsize>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <domain type='qemu'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <domain type='kvm'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </arch>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <pae/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <nonpae/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <acpi default='on' toggle='yes'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <apic default='on' toggle='no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <cpuselection/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <deviceboot/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <disksnapshot default='on' toggle='no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <externalSnapshot/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:   </guest>
Nov 22 05:45:11 compute-0 nova_compute[255660]: 
Nov 22 05:45:11 compute-0 nova_compute[255660]:   <guest>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <os_type>hvm</os_type>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <arch name='x86_64'>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <wordsize>64</wordsize>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <domain type='qemu'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <domain type='kvm'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </arch>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     <features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <acpi default='on' toggle='yes'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <apic default='on' toggle='no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <cpuselection/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <deviceboot/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <disksnapshot default='on' toggle='no'/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:       <externalSnapshot/>
Nov 22 05:45:11 compute-0 nova_compute[255660]:     </features>
Nov 22 05:45:11 compute-0 nova_compute[255660]:   </guest>
Nov 22 05:45:11 compute-0 nova_compute[255660]: 
Nov 22 05:45:11 compute-0 nova_compute[255660]: </capabilities>
Nov 22 05:45:11 compute-0 nova_compute[255660]: 
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.996 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 05:45:11 compute-0 nova_compute[255660]: 2025-11-22 05:45:11.998 255664 DEBUG nova.virt.libvirt.volume.mount [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.002 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 05:45:12 compute-0 nova_compute[255660]: <domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <domain>kvm</domain>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <arch>i686</arch>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <vcpu max='240'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <iothreads supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <os supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='firmware'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <loader supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>rom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pflash</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='readonly'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>yes</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='secure'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </loader>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </os>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='maximumMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <vendor>AMD</vendor>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='succor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='custom' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-128'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-256'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-512'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <memoryBacking supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='sourceType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>anonymous</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>memfd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </memoryBacking>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <disk supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='diskDevice'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>disk</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cdrom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>floppy</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>lun</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ide</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>fdc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>sata</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </disk>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <graphics supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vnc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egl-headless</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </graphics>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <video supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='modelType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vga</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cirrus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>none</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>bochs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ramfb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </video>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hostdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='mode'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>subsystem</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='startupPolicy'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>mandatory</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>requisite</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>optional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='subsysType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pci</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='capsType'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='pciBackend'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hostdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <rng supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>random</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </rng>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <filesystem supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='driverType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>path</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>handle</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtiofs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </filesystem>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <tpm supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-tis</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-crb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emulator</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>external</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendVersion'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>2.0</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </tpm>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <redirdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </redirdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <channel supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </channel>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <crypto supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </crypto>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <interface supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>passt</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </interface>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <panic supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>isa</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>hyperv</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </panic>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <console supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>null</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dev</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pipe</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stdio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>udp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tcp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu-vdagent</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </console>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <features>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <gic supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <genid supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backup supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <async-teardown supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <ps2 supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sev supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sgx supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hyperv supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='features'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>relaxed</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vapic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>spinlocks</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vpindex</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>runtime</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>synic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stimer</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reset</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vendor_id</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>frequencies</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reenlightenment</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tlbflush</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ipi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>avic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emsr_bitmap</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>xmm_input</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hyperv>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <launchSecurity supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='sectype'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tdx</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </launchSecurity>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </features>
Nov 22 05:45:12 compute-0 nova_compute[255660]: </domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.010 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 05:45:12 compute-0 nova_compute[255660]: <domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <domain>kvm</domain>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <arch>i686</arch>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <vcpu max='4096'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <iothreads supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <os supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='firmware'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <loader supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>rom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pflash</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='readonly'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>yes</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='secure'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </loader>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </os>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='maximumMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <vendor>AMD</vendor>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='succor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='custom' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-128'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-256'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-512'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <memoryBacking supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='sourceType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>anonymous</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>memfd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </memoryBacking>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <disk supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='diskDevice'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>disk</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cdrom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>floppy</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>lun</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>fdc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>sata</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </disk>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <graphics supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vnc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egl-headless</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </graphics>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <video supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='modelType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vga</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cirrus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>none</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>bochs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ramfb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </video>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hostdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='mode'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>subsystem</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='startupPolicy'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>mandatory</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>requisite</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>optional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='subsysType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pci</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='capsType'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='pciBackend'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hostdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <rng supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>random</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </rng>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <filesystem supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='driverType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>path</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>handle</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtiofs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </filesystem>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <tpm supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-tis</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-crb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emulator</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>external</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendVersion'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>2.0</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </tpm>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <redirdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </redirdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <channel supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </channel>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <crypto supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </crypto>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <interface supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>passt</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </interface>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <panic supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>isa</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>hyperv</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </panic>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <console supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>null</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dev</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pipe</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stdio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>udp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tcp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu-vdagent</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </console>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <features>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <gic supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <genid supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backup supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <async-teardown supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <ps2 supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sev supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sgx supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hyperv supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='features'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>relaxed</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vapic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>spinlocks</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vpindex</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>runtime</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>synic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stimer</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reset</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vendor_id</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>frequencies</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reenlightenment</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tlbflush</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ipi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>avic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emsr_bitmap</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>xmm_input</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hyperv>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <launchSecurity supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='sectype'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tdx</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </launchSecurity>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </features>
Nov 22 05:45:12 compute-0 nova_compute[255660]: </domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.032 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.038 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 05:45:12 compute-0 nova_compute[255660]: <domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <domain>kvm</domain>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <arch>x86_64</arch>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <vcpu max='240'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <iothreads supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <os supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='firmware'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <loader supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>rom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pflash</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='readonly'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>yes</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='secure'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </loader>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </os>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='maximumMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <vendor>AMD</vendor>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='succor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='custom' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-128'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-256'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-512'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <memoryBacking supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='sourceType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>anonymous</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>memfd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </memoryBacking>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <disk supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='diskDevice'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>disk</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cdrom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>floppy</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>lun</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ide</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>fdc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>sata</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </disk>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <graphics supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vnc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egl-headless</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </graphics>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <video supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='modelType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vga</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cirrus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>none</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>bochs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ramfb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </video>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hostdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='mode'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>subsystem</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='startupPolicy'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>mandatory</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>requisite</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>optional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='subsysType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pci</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='capsType'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='pciBackend'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hostdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <rng supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>random</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </rng>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <filesystem supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='driverType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>path</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>handle</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtiofs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </filesystem>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <tpm supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-tis</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-crb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emulator</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>external</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendVersion'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>2.0</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </tpm>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <redirdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </redirdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <channel supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </channel>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <crypto supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </crypto>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <interface supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>passt</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </interface>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <panic supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>isa</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>hyperv</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </panic>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <console supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>null</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dev</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pipe</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stdio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>udp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tcp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu-vdagent</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </console>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <features>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <gic supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <genid supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backup supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <async-teardown supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <ps2 supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sev supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sgx supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hyperv supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='features'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>relaxed</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vapic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>spinlocks</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vpindex</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>runtime</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>synic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stimer</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reset</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vendor_id</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>frequencies</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reenlightenment</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tlbflush</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ipi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>avic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emsr_bitmap</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>xmm_input</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hyperv>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <launchSecurity supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='sectype'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tdx</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </launchSecurity>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </features>
Nov 22 05:45:12 compute-0 nova_compute[255660]: </domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.100 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 05:45:12 compute-0 nova_compute[255660]: <domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <domain>kvm</domain>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <arch>x86_64</arch>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <vcpu max='4096'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <iothreads supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <os supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='firmware'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>efi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <loader supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>rom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pflash</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='readonly'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>yes</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='secure'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>yes</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>no</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </loader>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </os>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-passthrough' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='hostPassthroughMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='maximum' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='maximumMigratable'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>on</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>off</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='host-model' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <vendor>AMD</vendor>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='x2apic'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='hypervisor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='stibp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='overflow-recov'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='succor'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lbrv'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='tsc-scale'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='flushbyasid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pause-filter'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='pfthreshold'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <feature policy='disable' name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <mode name='custom' supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Broadwell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Cooperlake-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Denverton-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Dhyana-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='auto-ibrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Milan-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amd-psfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='no-nested-data-bp'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='null-sel-clr-base'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='stibp-always-on'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-Rome-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='EPYC-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='GraniteRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-128'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-256'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx10-512'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='prefetchiti'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Haswell-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v6'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Icelake-Server-v7'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='IvyBridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='KnightsMill-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4fmaps'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-4vnniw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512er'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512pf'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G4-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Opteron_G5-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fma4'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tbm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xop'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SapphireRapids-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='amx-tile'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-bf16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-fp16'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512-vpopcntdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bitalg'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vbmi2'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrc'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fzrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='la57'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='taa-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='tsx-ldtrk'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xfd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='SierraForest-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ifma'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-ne-convert'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx-vnni-int8'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='bus-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cmpccxadd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fbsdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='fsrs'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ibrs-all'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mcdt-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pbrsb-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='psdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='sbdr-ssdp-no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='serialize'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vaes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='vpclmulqdq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Client-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='hle'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='rtm'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Skylake-Server-v5'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512bw'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512cd'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512dq'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512f'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='avx512vl'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='invpcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pcid'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='pku'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='mpx'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v2'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v3'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='core-capability'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='split-lock-detect'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='Snowridge-v4'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='cldemote'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='erms'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='gfni'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdir64b'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='movdiri'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='xsaves'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='athlon-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='core2duo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='coreduo-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='n270-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='ss'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <blockers model='phenom-v1'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnow'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <feature name='3dnowext'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </blockers>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </mode>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </cpu>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <memoryBacking supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <enum name='sourceType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>anonymous</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <value>memfd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </memoryBacking>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <disk supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='diskDevice'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>disk</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cdrom</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>floppy</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>lun</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>fdc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>sata</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </disk>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <graphics supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vnc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egl-headless</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </graphics>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <video supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='modelType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vga</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>cirrus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>none</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>bochs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ramfb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </video>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hostdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='mode'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>subsystem</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='startupPolicy'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>mandatory</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>requisite</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>optional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='subsysType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pci</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>scsi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='capsType'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='pciBackend'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hostdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <rng supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtio-non-transitional</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>random</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>egd</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </rng>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <filesystem supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='driverType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>path</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>handle</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>virtiofs</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </filesystem>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <tpm supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-tis</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tpm-crb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emulator</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>external</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendVersion'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>2.0</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </tpm>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <redirdev supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='bus'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>usb</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </redirdev>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <channel supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </channel>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <crypto supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendModel'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>builtin</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </crypto>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <interface supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='backendType'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>default</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>passt</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </interface>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <panic supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='model'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>isa</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>hyperv</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </panic>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <console supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='type'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>null</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vc</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pty</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dev</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>file</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>pipe</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stdio</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>udp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tcp</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>unix</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>qemu-vdagent</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>dbus</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </console>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </devices>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   <features>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <gic supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <vmcoreinfo supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <genid supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backingStoreInput supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <backup supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <async-teardown supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <ps2 supported='yes'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sev supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <sgx supported='no'/>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <hyperv supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='features'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>relaxed</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vapic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>spinlocks</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vpindex</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>runtime</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>synic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>stimer</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reset</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>vendor_id</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>frequencies</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>reenlightenment</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tlbflush</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>ipi</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>avic</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>emsr_bitmap</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>xmm_input</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <spinlocks>4095</spinlocks>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <stimer_direct>on</stimer_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </defaults>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </hyperv>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     <launchSecurity supported='yes'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       <enum name='sectype'>
Nov 22 05:45:12 compute-0 nova_compute[255660]:         <value>tdx</value>
Nov 22 05:45:12 compute-0 nova_compute[255660]:       </enum>
Nov 22 05:45:12 compute-0 nova_compute[255660]:     </launchSecurity>
Nov 22 05:45:12 compute-0 nova_compute[255660]:   </features>
Nov 22 05:45:12 compute-0 nova_compute[255660]: </domainCapabilities>
Nov 22 05:45:12 compute-0 nova_compute[255660]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.166 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.167 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Secure Boot support detected
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.169 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.170 255664 INFO nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.178 255664 DEBUG nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.205 255664 INFO nova.virt.node [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Determined node identity 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from /var/lib/nova/compute_id
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.228 255664 WARNING nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute nodes ['7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.261 255664 INFO nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.278 255664 WARNING nova.compute.manager [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.279 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.279 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.280 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.280 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.281 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:45:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:45:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508448704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.699 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.882 255664 WARNING nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5164MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.883 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.898 255664 WARNING nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] No compute node record for compute-0.ctlplane.example.com:7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 could not be found.
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.914 255664 INFO nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.984 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:45:12 compute-0 nova_compute[255660]: 2025-11-22 05:45:12.984 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:45:13 compute-0 ceph-mon[75840]: pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2508448704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:13 compute-0 nova_compute[255660]: 2025-11-22 05:45:13.943 255664 INFO nova.scheduler.client.report [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] [req-304c1646-fc5e-4a45-b4f8-71dd3059107d] Created resource provider record via placement API for resource provider with UUID 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 and name compute-0.ctlplane.example.com.
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.294 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:45:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:45:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351671620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.733 255664 DEBUG oslo_concurrency.processutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.740 255664 DEBUG nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 05:45:14 compute-0 nova_compute[255660]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.740 255664 INFO nova.virt.libvirt.host [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] kernel doesn't support AMD SEV
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.741 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.742 255664 DEBUG nova.virt.libvirt.driver [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.812 255664 DEBUG nova.scheduler.client.report [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updated inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.813 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.813 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 05:45:14 compute-0 nova_compute[255660]: 2025-11-22 05:45:14.971 255664 DEBUG nova.compute.provider_tree [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Updating resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 05:45:15 compute-0 nova_compute[255660]: 2025-11-22 05:45:15.017 255664 DEBUG nova.compute.resource_tracker [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:45:15 compute-0 nova_compute[255660]: 2025-11-22 05:45:15.018 255664 DEBUG oslo_concurrency.lockutils [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:45:15 compute-0 nova_compute[255660]: 2025-11-22 05:45:15.019 255664 DEBUG nova.service [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 22 05:45:15 compute-0 nova_compute[255660]: 2025-11-22 05:45:15.412 255664 DEBUG nova.service [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 22 05:45:15 compute-0 nova_compute[255660]: 2025-11-22 05:45:15.413 255664 DEBUG nova.servicegroup.drivers.db [None req-b040ffd8-f6ba-44ea-8134-8211128c3206 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 22 05:45:15 compute-0 ceph-mon[75840]: pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2351671620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:45:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:17 compute-0 podman[256000]: 2025-11-22 05:45:17.23962222 +0000 UTC m=+0.093737291 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:45:17 compute-0 podman[256020]: 2025-11-22 05:45:17.337339026 +0000 UTC m=+0.064552093 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:45:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:17 compute-0 ceph-mon[75840]: pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:19 compute-0 ceph-mon[75840]: pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:21 compute-0 ceph-mon[75840]: pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:23 compute-0 ceph-mon[75840]: pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:25 compute-0 ceph-mon[75840]: pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:27 compute-0 ceph-mon[75840]: pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:29 compute-0 ceph-mon[75840]: pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:30 compute-0 ceph-mon[75840]: pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:31 compute-0 nova_compute[255660]: 2025-11-22 05:45:31.415 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:45:31 compute-0 nova_compute[255660]: 2025-11-22 05:45:31.449 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:45:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:45:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:45:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:33 compute-0 ceph-mon[75840]: pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:33 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:33 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2533388122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:45:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:45:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:45:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:45:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/3722392471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:45:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2825550237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:45:35 compute-0 ceph-mon[75840]: pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:36 compute-0 podman[256038]: 2025-11-22 05:45:36.282744122 +0000 UTC m=+0.147568236 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:45:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:45:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:45:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:45:36.909 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:45:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:37 compute-0 ceph-mon[75840]: pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:38 compute-0 sudo[256064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:38 compute-0 sudo[256064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256064]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:45:38 compute-0 sudo[256089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256089]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:38 compute-0 sudo[256114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256114]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 05:45:38 compute-0 sudo[256139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:38 compute-0 sudo[256139]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:45:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:45:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:38 compute-0 sudo[256183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:38 compute-0 sudo[256183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256183]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:45:38 compute-0 sudo[256208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256208]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:38 compute-0 sudo[256233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:38 compute-0 sudo[256233]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:38 compute-0 sudo[256258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:45:38 compute-0 sudo[256258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:39 compute-0 sudo[256258]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 265f5846-4d0f-45fb-8dad-b8aa3320248f does not exist
Nov 22 05:45:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4d529e8a-b83c-43db-a206-d7ea20f58536 does not exist
Nov 22 05:45:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 18602d0c-fed7-4420-ac7b-53346e88b9d8 does not exist
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:45:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:45:39 compute-0 sudo[256314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:39 compute-0 sudo[256314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:39 compute-0 sudo[256314]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:39 compute-0 sudo[256339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:45:39 compute-0 sudo[256339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:39 compute-0 sudo[256339]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:39 compute-0 sudo[256364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:39 compute-0 sudo[256364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:39 compute-0 sudo[256364]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:39 compute-0 ceph-mon[75840]: pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:45:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:45:39 compute-0 sudo[256389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:45:39 compute-0 sudo[256389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.075465729 +0000 UTC m=+0.058375757 container create a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:45:40 compute-0 systemd[1]: Started libpod-conmon-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope.
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.048535231 +0000 UTC m=+0.031445289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.196823976 +0000 UTC m=+0.179734024 container init a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.204813249 +0000 UTC m=+0.187723287 container start a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.209987147 +0000 UTC m=+0.192897225 container attach a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:45:40 compute-0 musing_khorana[256471]: 167 167
Nov 22 05:45:40 compute-0 systemd[1]: libpod-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope: Deactivated successfully.
Nov 22 05:45:40 compute-0 conmon[256471]: conmon a1990ebd488b8ba07348 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope/container/memory.events
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.212178486 +0000 UTC m=+0.195088524 container died a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65825b4d482a7168459bfd24d952b714823efeaf398e00ca899008346aa8608-merged.mount: Deactivated successfully.
Nov 22 05:45:40 compute-0 podman[256455]: 2025-11-22 05:45:40.285556042 +0000 UTC m=+0.268466090 container remove a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 05:45:40 compute-0 systemd[1]: libpod-conmon-a1990ebd488b8ba07348c0109d6737bf6c81c412174623a2f0b1e32bd7cce275.scope: Deactivated successfully.
Nov 22 05:45:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:40 compute-0 podman[256495]: 2025-11-22 05:45:40.554586718 +0000 UTC m=+0.089325653 container create a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:45:40 compute-0 podman[256495]: 2025-11-22 05:45:40.498978795 +0000 UTC m=+0.033717770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:40 compute-0 systemd[1]: Started libpod-conmon-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope.
Nov 22 05:45:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:40 compute-0 podman[256495]: 2025-11-22 05:45:40.721204502 +0000 UTC m=+0.255943447 container init a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:45:40 compute-0 podman[256495]: 2025-11-22 05:45:40.730382387 +0000 UTC m=+0.265121312 container start a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:45:40 compute-0 podman[256495]: 2025-11-22 05:45:40.747512343 +0000 UTC m=+0.282251288 container attach a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:45:40 compute-0 ceph-mon[75840]: pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:41 compute-0 gifted_aryabhata[256512]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:45:41 compute-0 gifted_aryabhata[256512]: --> relative data size: 1.0
Nov 22 05:45:41 compute-0 gifted_aryabhata[256512]: --> All data devices are unavailable
Nov 22 05:45:41 compute-0 systemd[1]: libpod-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Deactivated successfully.
Nov 22 05:45:41 compute-0 systemd[1]: libpod-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Consumed 1.055s CPU time.
Nov 22 05:45:41 compute-0 podman[256541]: 2025-11-22 05:45:41.895530413 +0000 UTC m=+0.027796623 container died a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce51ccbf96f63996f0b2a034ec53f1cf9cbf148733576ee6d582a063d1cbf32f-merged.mount: Deactivated successfully.
Nov 22 05:45:41 compute-0 podman[256541]: 2025-11-22 05:45:41.983217972 +0000 UTC m=+0.115484142 container remove a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:45:41 compute-0 systemd[1]: libpod-conmon-a0566eeec121f5f72166a148bb8861e1982370c725ce5d9530f3d1e2d92912cc.scope: Deactivated successfully.
Nov 22 05:45:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:45:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5626 writes, 23K keys, 5626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5626 writes, 880 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:45:42 compute-0 sudo[256389]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:42 compute-0 sudo[256556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:42 compute-0 sudo[256556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:42 compute-0 sudo[256556]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:42 compute-0 sudo[256581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:45:42 compute-0 sudo[256581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:42 compute-0 sudo[256581]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:42 compute-0 sudo[256606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:42 compute-0 sudo[256606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:42 compute-0 sudo[256606]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:42 compute-0 sudo[256631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:45:42 compute-0 sudo[256631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.785585292 +0000 UTC m=+0.072764792 container create f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:45:42 compute-0 systemd[1]: Started libpod-conmon-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope.
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.757566154 +0000 UTC m=+0.044745704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.868234056 +0000 UTC m=+0.155413546 container init f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.878978073 +0000 UTC m=+0.166157573 container start f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:45:42 compute-0 kind_edison[256712]: 167 167
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.884058588 +0000 UTC m=+0.171238048 container attach f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:45:42 compute-0 systemd[1]: libpod-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope: Deactivated successfully.
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.885095506 +0000 UTC m=+0.172275036 container died f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f0c7c48a6378898dd9c3df82cfa8b52ae6c9c846ec0e844651cc4abb95b4c2-merged.mount: Deactivated successfully.
Nov 22 05:45:42 compute-0 podman[256696]: 2025-11-22 05:45:42.932663674 +0000 UTC m=+0.219843144 container remove f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:45:42 compute-0 systemd[1]: libpod-conmon-f1b0224dd18aac30925332e8ba0d98347338515ed059a78d38f59dcf075249ca.scope: Deactivated successfully.
Nov 22 05:45:43 compute-0 podman[256735]: 2025-11-22 05:45:43.12334848 +0000 UTC m=+0.056737694 container create b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:45:43 compute-0 systemd[1]: Started libpod-conmon-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope.
Nov 22 05:45:43 compute-0 podman[256735]: 2025-11-22 05:45:43.099625657 +0000 UTC m=+0.033014851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:43 compute-0 podman[256735]: 2025-11-22 05:45:43.240112874 +0000 UTC m=+0.173502068 container init b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:45:43 compute-0 podman[256735]: 2025-11-22 05:45:43.247019608 +0000 UTC m=+0.180408782 container start b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:45:43 compute-0 podman[256735]: 2025-11-22 05:45:43.251066396 +0000 UTC m=+0.184455600 container attach b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:45:43 compute-0 ceph-mon[75840]: pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:45:43
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes']
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:45:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:45:44 compute-0 youthful_brattain[256751]: {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     "0": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "devices": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "/dev/loop3"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             ],
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_name": "ceph_lv0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_size": "21470642176",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "name": "ceph_lv0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "tags": {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_name": "ceph",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.crush_device_class": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.encrypted": "0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_id": "0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.vdo": "0"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             },
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "vg_name": "ceph_vg0"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         }
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     ],
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     "1": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "devices": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "/dev/loop4"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             ],
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_name": "ceph_lv1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_size": "21470642176",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "name": "ceph_lv1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "tags": {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_name": "ceph",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.crush_device_class": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.encrypted": "0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_id": "1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.vdo": "0"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             },
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "vg_name": "ceph_vg1"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         }
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     ],
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     "2": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "devices": [
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "/dev/loop5"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             ],
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_name": "ceph_lv2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_size": "21470642176",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "name": "ceph_lv2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "tags": {
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.cluster_name": "ceph",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.crush_device_class": "",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.encrypted": "0",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osd_id": "2",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:                 "ceph.vdo": "0"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             },
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "type": "block",
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:             "vg_name": "ceph_vg2"
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:         }
Nov 22 05:45:44 compute-0 youthful_brattain[256751]:     ]
Nov 22 05:45:44 compute-0 youthful_brattain[256751]: }
Nov 22 05:45:44 compute-0 systemd[1]: libpod-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope: Deactivated successfully.
Nov 22 05:45:44 compute-0 podman[256735]: 2025-11-22 05:45:44.054561257 +0000 UTC m=+0.987950431 container died b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c836e763e25fc696e8998b12e207f8428d1ad5bb30c57335d44165d52ea09cb4-merged.mount: Deactivated successfully.
Nov 22 05:45:44 compute-0 podman[256735]: 2025-11-22 05:45:44.119418676 +0000 UTC m=+1.052807850 container remove b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_brattain, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:45:44 compute-0 systemd[1]: libpod-conmon-b9ac66b16b7b12bbef6152b52f3222a601e193902b246d53fb5dab6013b5d867.scope: Deactivated successfully.
Nov 22 05:45:44 compute-0 sudo[256631]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:44 compute-0 sudo[256774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:44 compute-0 sudo[256774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:44 compute-0 sudo[256774]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:44 compute-0 sudo[256799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:45:44 compute-0 sudo[256799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:44 compute-0 sudo[256799]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:44 compute-0 sudo[256824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:44 compute-0 sudo[256824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:44 compute-0 sudo[256824]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:44 compute-0 sudo[256849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:45:44 compute-0 sudo[256849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.788363028 +0000 UTC m=+0.045582276 container create b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:45:44 compute-0 systemd[1]: Started libpod-conmon-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope.
Nov 22 05:45:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.767907793 +0000 UTC m=+0.025127011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.884911003 +0000 UTC m=+0.142130221 container init b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.891866509 +0000 UTC m=+0.149085737 container start b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:45:44 compute-0 nervous_torvalds[256930]: 167 167
Nov 22 05:45:44 compute-0 systemd[1]: libpod-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope: Deactivated successfully.
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.895855235 +0000 UTC m=+0.153074453 container attach b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.899834981 +0000 UTC m=+0.157054189 container died b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c92bfb74998819e55c4833661c832a638674285dafd6f52934489e0e23087c-merged.mount: Deactivated successfully.
Nov 22 05:45:44 compute-0 podman[256914]: 2025-11-22 05:45:44.936313884 +0000 UTC m=+0.193533082 container remove b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:45:44 compute-0 systemd[1]: libpod-conmon-b390773fe5b96470ea541ae9f49e01794cc58514a202940f5e8a4139a85ac243.scope: Deactivated successfully.
Nov 22 05:45:45 compute-0 podman[256952]: 2025-11-22 05:45:45.145259087 +0000 UTC m=+0.064684226 container create ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:45:45 compute-0 systemd[1]: Started libpod-conmon-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope.
Nov 22 05:45:45 compute-0 podman[256952]: 2025-11-22 05:45:45.117354013 +0000 UTC m=+0.036779152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:45:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:45:45 compute-0 podman[256952]: 2025-11-22 05:45:45.248268845 +0000 UTC m=+0.167693964 container init ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:45:45 compute-0 podman[256952]: 2025-11-22 05:45:45.259686289 +0000 UTC m=+0.179111418 container start ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:45:45 compute-0 podman[256952]: 2025-11-22 05:45:45.266604174 +0000 UTC m=+0.186029283 container attach ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:45:45 compute-0 ceph-mon[75840]: pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]: {
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_id": 1,
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "type": "bluestore"
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     },
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_id": 2,
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "type": "bluestore"
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     },
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_id": 0,
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:         "type": "bluestore"
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]:     }
Nov 22 05:45:46 compute-0 nervous_wescoff[256970]: }
Nov 22 05:45:46 compute-0 systemd[1]: libpod-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Deactivated successfully.
Nov 22 05:45:46 compute-0 systemd[1]: libpod-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Consumed 1.068s CPU time.
Nov 22 05:45:46 compute-0 podman[256952]: 2025-11-22 05:45:46.321025266 +0000 UTC m=+1.240450395 container died ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:45:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e0956f779bccfc7704f0b543dad58bca25907d002aa8eba4bf532ecc1561346-merged.mount: Deactivated successfully.
Nov 22 05:45:46 compute-0 podman[256952]: 2025-11-22 05:45:46.606623444 +0000 UTC m=+1.526048573 container remove ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:45:46 compute-0 systemd[1]: libpod-conmon-ee40cef8dcdb6357f18900f3000672c94f2e53c5d9819bf7ade3ef30bbaa8c5e.scope: Deactivated successfully.
Nov 22 05:45:46 compute-0 sudo[256849]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:45:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:45:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:46 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ebfd965-df5f-40f0-a2b5-d3d03a24ae1e does not exist
Nov 22 05:45:46 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9e1c12af-94bc-4638-a65d-8ca60c609d18 does not exist
Nov 22 05:45:46 compute-0 sudo[257015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:45:46 compute-0 sudo[257015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:46 compute-0 sudo[257015]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:46 compute-0 sudo[257040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:45:46 compute-0 sudo[257040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:45:46 compute-0 sudo[257040]: pam_unix(sudo:session): session closed for user root
Nov 22 05:45:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:45:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 6951 writes, 28K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6951 writes, 1245 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:45:47 compute-0 ceph-mon[75840]: pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:45:48 compute-0 podman[257066]: 2025-11-22 05:45:48.220230119 +0000 UTC m=+0.068695103 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 05:45:48 compute-0 podman[257065]: 2025-11-22 05:45:48.242420541 +0000 UTC m=+0.091473670 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 05:45:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:48 compute-0 ceph-mon[75840]: pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:51 compute-0 ceph-mon[75840]: pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:45:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:45:53 compute-0 ceph-mon[75840]: pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:45:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 05:45:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:55 compute-0 ceph-mon[75840]: pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:56 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 05:45:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:45:57 compute-0 ceph-mon[75840]: pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:45:59 compute-0 ceph-mon[75840]: pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:01 compute-0 ceph-mon[75840]: pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:01 compute-0 sshd-session[257102]: Invalid user solana from 80.94.92.166 port 36078
Nov 22 05:46:01 compute-0 sshd-session[257102]: Connection closed by invalid user solana 80.94.92.166 port 36078 [preauth]
Nov 22 05:46:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:03 compute-0 ceph-mon[75840]: pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:05 compute-0 ceph-mon[75840]: pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:07 compute-0 podman[257104]: 2025-11-22 05:46:07.25563454 +0000 UTC m=+0.112987432 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 05:46:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:07 compute-0 ceph-mon[75840]: pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.643597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367643663, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1519, "num_deletes": 251, "total_data_size": 2495675, "memory_usage": 2525216, "flush_reason": "Manual Compaction"}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367661235, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2440715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14828, "largest_seqno": 16346, "table_properties": {"data_size": 2433615, "index_size": 4171, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14340, "raw_average_key_size": 19, "raw_value_size": 2419448, "raw_average_value_size": 3318, "num_data_blocks": 191, "num_entries": 729, "num_filter_entries": 729, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790203, "oldest_key_time": 1763790203, "file_creation_time": 1763790367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17738 microseconds, and 10591 cpu microseconds.
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.661330) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2440715 bytes OK
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.661373) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663712) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663738) EVENT_LOG_v1 {"time_micros": 1763790367663729, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.663770) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2489039, prev total WAL file size 2489039, number of live WAL files 2.
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.665267) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2383KB)], [35(6852KB)]
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367665343, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9457323, "oldest_snapshot_seqno": -1}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4003 keys, 7683708 bytes, temperature: kUnknown
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367705161, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7683708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7654650, "index_size": 17940, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97835, "raw_average_key_size": 24, "raw_value_size": 7579911, "raw_average_value_size": 1893, "num_data_blocks": 759, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.705667) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7683708 bytes
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.706887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.1 rd, 192.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.7 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4517, records dropped: 514 output_compression: NoCompression
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.706905) EVENT_LOG_v1 {"time_micros": 1763790367706896, "job": 16, "event": "compaction_finished", "compaction_time_micros": 39883, "compaction_time_cpu_micros": 18622, "output_level": 6, "num_output_files": 1, "total_output_size": 7683708, "num_input_records": 4517, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367707406, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790367708917, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.665132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:07 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:46:07.709060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:46:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:09 compute-0 ceph-mon[75840]: pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.132 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.133 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.134 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.150 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.151 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.151 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.152 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.152 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.153 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.153 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.154 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.179 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.180 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.181 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.181 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.182 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:46:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:46:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466684376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:46:11 compute-0 nova_compute[255660]: 2025-11-22 05:46:11.644 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:46:11 compute-0 ceph-mon[75840]: pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/466684376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:46:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:13 compute-0 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 22 05:46:13 compute-0 irqbalance[791]: IRQ 26 affinity is now unmanaged
Nov 22 05:46:13 compute-0 ceph-mon[75840]: pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 05:46:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2149655122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2149655122' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 05:46:14 compute-0 ceph-mon[75840]: from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.027 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.030 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.030 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.031 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.170 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.171 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.213 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:46:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:46:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847256402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:46:15 compute-0 ceph-mon[75840]: pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3847256402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.694 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.699 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.716 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.717 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:46:15 compute-0 nova_compute[255660]: 2025-11-22 05:46:15.717 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:46:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:17 compute-0 ceph-mon[75840]: pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:19 compute-0 podman[257174]: 2025-11-22 05:46:19.206239349 +0000 UTC m=+0.067398498 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:46:19 compute-0 podman[257175]: 2025-11-22 05:46:19.221367305 +0000 UTC m=+0.075171837 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:46:19 compute-0 ceph-mon[75840]: pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:21 compute-0 ceph-mon[75840]: pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:23 compute-0 ceph-mon[75840]: pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:25 compute-0 ceph-mon[75840]: pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:26 compute-0 sshd[191303]: Timeout before authentication for connection from 123.253.22.30 to 38.102.83.23, pid = 250120
Nov 22 05:46:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:27 compute-0 ceph-mon[75840]: pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 05:46:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 05:46:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 05:46:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 05:46:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 05:46:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:28 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 05:46:29 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 05:46:29 compute-0 ceph-mon[75840]: pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:30 compute-0 ceph-mon[75840]: pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:33 compute-0 ceph-mon[75840]: pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:35 compute-0 ceph-mon[75840]: pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.910 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:46:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.911 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:46:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:46:36.911 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:46:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:37 compute-0 ceph-mon[75840]: pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:38 compute-0 podman[257212]: 2025-11-22 05:46:38.29617817 +0000 UTC m=+0.144155257 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 05:46:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:39 compute-0 ceph-mon[75840]: pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:41 compute-0 ceph-mon[75840]: pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:43 compute-0 ceph-mon[75840]: pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:46:43
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control']
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:46:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:46:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:45 compute-0 ceph-mon[75840]: pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:46 compute-0 sudo[257238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:46 compute-0 sudo[257238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:46 compute-0 sudo[257238]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 sudo[257263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:46:47 compute-0 sudo[257263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 sudo[257263]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 sudo[257288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:47 compute-0 sudo[257288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 sudo[257288]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:46:47 compute-0 sudo[257313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:46:47 compute-0 sudo[257313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:47 compute-0 ceph-mon[75840]: pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1533659100' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:46:47 compute-0 sudo[257313]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:47 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5b185e99-76ae-414e-9d5c-728ee8fbbdab does not exist
Nov 22 05:46:47 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 21b0d40e-9696-43c6-be87-77b99338a1ac does not exist
Nov 22 05:46:47 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev cbffa9dd-e575-4e6b-ae90-249d99a46d14 does not exist
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:46:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:46:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:46:47 compute-0 sudo[257369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:47 compute-0 sudo[257369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 sudo[257369]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 sudo[257394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:46:47 compute-0 sudo[257394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 sudo[257394]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 sudo[257419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:47 compute-0 sudo[257419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:47 compute-0 sudo[257419]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:47 compute-0 sudo[257444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:46:47 compute-0 sudo[257444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.303495507 +0000 UTC m=+0.043583959 container create 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:46:48 compute-0 systemd[1]: Started libpod-conmon-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope.
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.280797029 +0000 UTC m=+0.020885461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.404200688 +0000 UTC m=+0.144289190 container init 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.416222461 +0000 UTC m=+0.156310913 container start 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.420113855 +0000 UTC m=+0.160202377 container attach 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:46:48 compute-0 recursing_kilby[257525]: 167 167
Nov 22 05:46:48 compute-0 systemd[1]: libpod-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope: Deactivated successfully.
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.425299664 +0000 UTC m=+0.165388076 container died 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:46:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-46b30b747761560fc85b49b5cc29fbf702e04d700d72ab2f037092e1fd3ad09f-merged.mount: Deactivated successfully.
Nov 22 05:46:48 compute-0 podman[257509]: 2025-11-22 05:46:48.471162884 +0000 UTC m=+0.211251306 container remove 309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_kilby, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:46:48 compute-0 systemd[1]: libpod-conmon-309cbf1c9a9f6639fc9286d5a057c5980c9dc1b0f581332d89ef46a46d940e26.scope: Deactivated successfully.
Nov 22 05:46:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:46:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:46:48 compute-0 podman[257549]: 2025-11-22 05:46:48.690522577 +0000 UTC m=+0.056788195 container create eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:46:48 compute-0 systemd[1]: Started libpod-conmon-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope.
Nov 22 05:46:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:48 compute-0 podman[257549]: 2025-11-22 05:46:48.670318754 +0000 UTC m=+0.036584412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:48 compute-0 podman[257549]: 2025-11-22 05:46:48.77232659 +0000 UTC m=+0.138592248 container init eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:46:48 compute-0 podman[257549]: 2025-11-22 05:46:48.779804531 +0000 UTC m=+0.146070139 container start eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:46:48 compute-0 podman[257549]: 2025-11-22 05:46:48.782978206 +0000 UTC m=+0.149243854 container attach eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:46:49 compute-0 ceph-mon[75840]: pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:49 compute-0 festive_noether[257566]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:46:49 compute-0 festive_noether[257566]: --> relative data size: 1.0
Nov 22 05:46:49 compute-0 festive_noether[257566]: --> All data devices are unavailable
Nov 22 05:46:49 compute-0 systemd[1]: libpod-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Deactivated successfully.
Nov 22 05:46:49 compute-0 systemd[1]: libpod-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Consumed 1.069s CPU time.
Nov 22 05:46:49 compute-0 podman[257549]: 2025-11-22 05:46:49.886035896 +0000 UTC m=+1.252301574 container died eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:46:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b08519998358ce1b4eecaa6ceca10251afccc5e2ec7afa15ba79a7e005dc02c-merged.mount: Deactivated successfully.
Nov 22 05:46:49 compute-0 podman[257549]: 2025-11-22 05:46:49.970524491 +0000 UTC m=+1.336790099 container remove eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:46:49 compute-0 systemd[1]: libpod-conmon-eb5678646b03d1b287db4e05c57f673c430ee6962bee68a24adc640e4127916c.scope: Deactivated successfully.
Nov 22 05:46:49 compute-0 sudo[257444]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:50 compute-0 podman[257595]: 2025-11-22 05:46:50.004691308 +0000 UTC m=+0.083288735 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 05:46:50 compute-0 podman[257603]: 2025-11-22 05:46:50.030420168 +0000 UTC m=+0.107059413 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:46:50 compute-0 sudo[257646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:50 compute-0 sudo[257646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:50 compute-0 sudo[257646]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:50 compute-0 sudo[257671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:46:50 compute-0 sudo[257671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:50 compute-0 sudo[257671]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:50 compute-0 sudo[257696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:50 compute-0 sudo[257696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:50 compute-0 sudo[257696]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:50 compute-0 sudo[257721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:46:50 compute-0 sudo[257721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.042341634 +0000 UTC m=+0.066828194 container create 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:46:51 compute-0 systemd[1]: Started libpod-conmon-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope.
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.014389044 +0000 UTC m=+0.038875654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.147778851 +0000 UTC m=+0.172265451 container init 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.160359728 +0000 UTC m=+0.184846288 container start 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.164829188 +0000 UTC m=+0.189315728 container attach 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:46:51 compute-0 dreamy_perlman[257803]: 167 167
Nov 22 05:46:51 compute-0 systemd[1]: libpod-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope: Deactivated successfully.
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.168857116 +0000 UTC m=+0.193343686 container died 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-602f0a0012665a25b52c374a343500ce1061b96959c0d37028ff9ad6fc4361eb-merged.mount: Deactivated successfully.
Nov 22 05:46:51 compute-0 podman[257787]: 2025-11-22 05:46:51.218184259 +0000 UTC m=+0.242670789 container remove 34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:46:51 compute-0 systemd[1]: libpod-conmon-34fa23cdd93356000415a7c2a2436083c56dd9abb18c917e91124920e110c920.scope: Deactivated successfully.
Nov 22 05:46:51 compute-0 podman[257827]: 2025-11-22 05:46:51.45018312 +0000 UTC m=+0.058770017 container create eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:46:51 compute-0 systemd[1]: Started libpod-conmon-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope.
Nov 22 05:46:51 compute-0 podman[257827]: 2025-11-22 05:46:51.420680489 +0000 UTC m=+0.029267496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:51 compute-0 podman[257827]: 2025-11-22 05:46:51.571646298 +0000 UTC m=+0.180233235 container init eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:46:51 compute-0 podman[257827]: 2025-11-22 05:46:51.580728411 +0000 UTC m=+0.189315338 container start eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:46:51 compute-0 podman[257827]: 2025-11-22 05:46:51.584442611 +0000 UTC m=+0.193029528 container attach eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Nov 22 05:46:51 compute-0 ceph-mon[75840]: pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:52 compute-0 epic_bose[257843]: {
Nov 22 05:46:52 compute-0 epic_bose[257843]:     "0": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:         {
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "devices": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "/dev/loop3"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             ],
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_name": "ceph_lv0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_size": "21470642176",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "name": "ceph_lv0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "tags": {
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.crush_device_class": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.encrypted": "0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_id": "0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.vdo": "0"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             },
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "vg_name": "ceph_vg0"
Nov 22 05:46:52 compute-0 epic_bose[257843]:         }
Nov 22 05:46:52 compute-0 epic_bose[257843]:     ],
Nov 22 05:46:52 compute-0 epic_bose[257843]:     "1": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:         {
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "devices": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "/dev/loop4"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             ],
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_name": "ceph_lv1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_size": "21470642176",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "name": "ceph_lv1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "tags": {
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.crush_device_class": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.encrypted": "0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_id": "1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.vdo": "0"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             },
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "vg_name": "ceph_vg1"
Nov 22 05:46:52 compute-0 epic_bose[257843]:         }
Nov 22 05:46:52 compute-0 epic_bose[257843]:     ],
Nov 22 05:46:52 compute-0 epic_bose[257843]:     "2": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:         {
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "devices": [
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "/dev/loop5"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             ],
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_name": "ceph_lv2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_size": "21470642176",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "name": "ceph_lv2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "tags": {
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.crush_device_class": "",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.encrypted": "0",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osd_id": "2",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:                 "ceph.vdo": "0"
Nov 22 05:46:52 compute-0 epic_bose[257843]:             },
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "type": "block",
Nov 22 05:46:52 compute-0 epic_bose[257843]:             "vg_name": "ceph_vg2"
Nov 22 05:46:52 compute-0 epic_bose[257843]:         }
Nov 22 05:46:52 compute-0 epic_bose[257843]:     ]
Nov 22 05:46:52 compute-0 epic_bose[257843]: }
Nov 22 05:46:52 compute-0 systemd[1]: libpod-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope: Deactivated successfully.
Nov 22 05:46:52 compute-0 podman[257827]: 2025-11-22 05:46:52.367017976 +0000 UTC m=+0.975604863 container died eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1de036f40484d8d6adbca8a50fd3b50fc8782dc89cb907cfcb829a8e5da9b368-merged.mount: Deactivated successfully.
Nov 22 05:46:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:52 compute-0 podman[257827]: 2025-11-22 05:46:52.428139625 +0000 UTC m=+1.036726522 container remove eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:46:52 compute-0 systemd[1]: libpod-conmon-eed6f47a5c914a6566d95a03979b310d3b12cba630c48e0b07f21dab13a86773.scope: Deactivated successfully.
Nov 22 05:46:52 compute-0 sudo[257721]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:52 compute-0 sudo[257863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:52 compute-0 sudo[257863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:52 compute-0 sudo[257863]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:52 compute-0 sudo[257888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:46:52 compute-0 sudo[257888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:52 compute-0 sudo[257888]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:52 compute-0 sudo[257913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:52 compute-0 sudo[257913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:52 compute-0 sudo[257913]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:52 compute-0 sudo[257938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:46:52 compute-0 sudo[257938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:46:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.155590433 +0000 UTC m=+0.062285322 container create c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:46:53 compute-0 systemd[1]: Started libpod-conmon-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope.
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.129263567 +0000 UTC m=+0.035958496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.244627141 +0000 UTC m=+0.151321990 container init c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.252358108 +0000 UTC m=+0.159052967 container start c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.256143079 +0000 UTC m=+0.162837928 container attach c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:46:53 compute-0 exciting_ardinghelli[258021]: 167 167
Nov 22 05:46:53 compute-0 systemd[1]: libpod-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope: Deactivated successfully.
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.258107332 +0000 UTC m=+0.164802191 container died c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-97cfcc02437fb508e0febf07d93fc162efeca3352844c5372d27fc8da873b7e5-merged.mount: Deactivated successfully.
Nov 22 05:46:53 compute-0 podman[258004]: 2025-11-22 05:46:53.303648523 +0000 UTC m=+0.210343372 container remove c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:46:53 compute-0 systemd[1]: libpod-conmon-c35d2d87bdec2e54488bd5d8e344bbc97f1de49f1c53975e26ce96a3cf46b0dd.scope: Deactivated successfully.
Nov 22 05:46:53 compute-0 podman[258046]: 2025-11-22 05:46:53.458722331 +0000 UTC m=+0.041756720 container create 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:46:53 compute-0 systemd[1]: Started libpod-conmon-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope.
Nov 22 05:46:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:53 compute-0 podman[258046]: 2025-11-22 05:46:53.443144103 +0000 UTC m=+0.026178512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:46:53 compute-0 podman[258046]: 2025-11-22 05:46:53.553588235 +0000 UTC m=+0.136622674 container init 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:46:53 compute-0 podman[258046]: 2025-11-22 05:46:53.565035702 +0000 UTC m=+0.148070121 container start 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:46:53 compute-0 podman[258046]: 2025-11-22 05:46:53.569418549 +0000 UTC m=+0.152452968 container attach 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:46:53 compute-0 ceph-mon[75840]: pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:54 compute-0 nifty_bell[258062]: {
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_id": 1,
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "type": "bluestore"
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     },
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_id": 2,
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "type": "bluestore"
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     },
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_id": 0,
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:46:54 compute-0 nifty_bell[258062]:         "type": "bluestore"
Nov 22 05:46:54 compute-0 nifty_bell[258062]:     }
Nov 22 05:46:54 compute-0 nifty_bell[258062]: }
Nov 22 05:46:54 compute-0 systemd[1]: libpod-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Deactivated successfully.
Nov 22 05:46:54 compute-0 systemd[1]: libpod-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Consumed 1.084s CPU time.
Nov 22 05:46:54 compute-0 podman[258046]: 2025-11-22 05:46:54.636731931 +0000 UTC m=+1.219766320 container died 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-745b35359c9c252d15fdf9bdd9a361ba997dc8ed67a645ec2791419bc23db0cc-merged.mount: Deactivated successfully.
Nov 22 05:46:54 compute-0 podman[258046]: 2025-11-22 05:46:54.716735407 +0000 UTC m=+1.299769806 container remove 42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:46:54 compute-0 systemd[1]: libpod-conmon-42ba5ace829abbc29d36064e5e31aff612ee7a4ff9d906fbc53ee0dcb0037356.scope: Deactivated successfully.
Nov 22 05:46:54 compute-0 sudo[257938]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:46:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:46:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:54 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5c3fc611-d1ef-45fa-ac2d-f75300d8187f does not exist
Nov 22 05:46:54 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev cd4c6266-9274-4a10-813d-4f78ffe25a6d does not exist
Nov 22 05:46:54 compute-0 sudo[258107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:46:54 compute-0 sudo[258107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:54 compute-0 sudo[258107]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:54 compute-0 sudo[258132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:46:54 compute-0 sudo[258132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:46:54 compute-0 sudo[258132]: pam_unix(sudo:session): session closed for user root
Nov 22 05:46:55 compute-0 ceph-mon[75840]: pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:46:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:56 compute-0 ceph-mon[75840]: pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:46:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:46:59 compute-0 ceph-mon[75840]: pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:01 compute-0 ceph-mon[75840]: pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:01 compute-0 anacron[30771]: Job `cron.monthly' started
Nov 22 05:47:01 compute-0 anacron[30771]: Job `cron.monthly' terminated
Nov 22 05:47:01 compute-0 anacron[30771]: Normal exit (3 jobs run)
Nov 22 05:47:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:03 compute-0 ceph-mon[75840]: pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:05 compute-0 ceph-mon[75840]: pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:07 compute-0 ceph-mon[75840]: pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:09 compute-0 podman[258159]: 2025-11-22 05:47:09.278232321 +0000 UTC m=+0.127973542 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:47:09 compute-0 ceph-mon[75840]: pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:11 compute-0 ceph-mon[75840]: pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:11 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.846 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:47:11 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.847 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:47:11 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:11.849 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:47:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:13 compute-0 ceph-mon[75840]: pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:15 compute-0 ceph-mon[75840]: pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.712 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.713 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.735 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.736 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.737 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.755 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.756 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.756 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.757 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.758 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.758 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.761 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.786 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.787 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.787 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.788 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:47:15 compute-0 nova_compute[255660]: 2025-11-22 05:47:15.788 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:47:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:47:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1773929974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.247 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.394 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.395 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.480 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.480 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.496 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:47:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 05:47:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1773929974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:47:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:47:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275177033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.955 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.963 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.979 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.982 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:47:16 compute-0 nova_compute[255660]: 2025-11-22 05:47:16.982 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:47:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:17 compute-0 ceph-mon[75840]: pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 22 05:47:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1275177033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:47:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 05:47:19 compute-0 ceph-mon[75840]: pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 05:47:20 compute-0 podman[258230]: 2025-11-22 05:47:20.227389355 +0000 UTC m=+0.073239145 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:47:20 compute-0 podman[258231]: 2025-11-22 05:47:20.249542299 +0000 UTC m=+0.093230552 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 05:47:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:21 compute-0 ceph-mon[75840]: pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:23 compute-0 ceph-mon[75840]: pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:24 compute-0 ceph-mon[75840]: pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:27 compute-0 ceph-mon[75840]: pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 05:47:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Nov 22 05:47:29 compute-0 ceph-mon[75840]: pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Nov 22 05:47:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 22 05:47:31 compute-0 ceph-mon[75840]: pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 22 05:47:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:33 compute-0 ceph-mon[75840]: pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:35 compute-0 ceph-mon[75840]: pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:47:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:47:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:47:36.912 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:47:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:37 compute-0 ceph-mon[75840]: pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:39 compute-0 ceph-mon[75840]: pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:40 compute-0 podman[258271]: 2025-11-22 05:47:40.253988792 +0000 UTC m=+0.115963661 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:47:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:41 compute-0 ceph-mon[75840]: pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:41 compute-0 sshd-session[258298]: Invalid user solv from 80.94.92.182 port 35342
Nov 22 05:47:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:42 compute-0 sshd-session[258298]: Connection closed by invalid user solv 80.94.92.182 port 35342 [preauth]
Nov 22 05:47:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:43 compute-0 ceph-mon[75840]: pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:47:43
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:47:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:47:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:45 compute-0 ceph-mon[75840]: pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:46 compute-0 ceph-mon[75840]: pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:47:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:47:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:47:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:47:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:47:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1276692147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:47:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:48 compute-0 ceph-mon[75840]: pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:51 compute-0 podman[258301]: 2025-11-22 05:47:51.203753633 +0000 UTC m=+0.058503560 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:47:51 compute-0 podman[258300]: 2025-11-22 05:47:51.211382308 +0000 UTC m=+0.063694449 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 05:47:51 compute-0 ceph-mon[75840]: pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:47:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:47:53 compute-0 ceph-mon[75840]: pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:55 compute-0 sudo[258341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:55 compute-0 sudo[258341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:55 compute-0 sudo[258341]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:55 compute-0 sudo[258366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:47:55 compute-0 sudo[258366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:55 compute-0 sudo[258366]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:55 compute-0 sudo[258391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:55 compute-0 sudo[258391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:55 compute-0 sudo[258391]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:55 compute-0 sudo[258416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:47:55 compute-0 sudo[258416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:55 compute-0 ceph-mon[75840]: pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:56 compute-0 podman[258512]: 2025-11-22 05:47:55.998901391 +0000 UTC m=+0.151370291 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:47:56 compute-0 podman[258512]: 2025-11-22 05:47:56.137536088 +0000 UTC m=+0.290005048 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:47:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:56 compute-0 sudo[258416]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:47:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:47:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:57 compute-0 sudo[258673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:57 compute-0 sudo[258673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:57 compute-0 sudo[258673]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:57 compute-0 sudo[258698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:47:57 compute-0 sudo[258698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:57 compute-0 sudo[258698]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:57 compute-0 sudo[258723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:57 compute-0 sudo[258723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:57 compute-0 sudo[258723]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:57 compute-0 sudo[258748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:47:57 compute-0 sudo[258748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:47:57 compute-0 ceph-mon[75840]: pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:57 compute-0 sudo[258748]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0c638df0-d6f5-4377-b1f9-1be760fd5958 does not exist
Nov 22 05:47:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d4c52154-f7a3-48ff-bf53-40205bd606a9 does not exist
Nov 22 05:47:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0e0df70b-a158-439b-ace5-e8ce98437250 does not exist
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:47:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:47:58 compute-0 sudo[258804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:58 compute-0 sudo[258804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:58 compute-0 sudo[258804]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:58 compute-0 sudo[258829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:47:58 compute-0 sudo[258829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:58 compute-0 sudo[258829]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:58 compute-0 sudo[258854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:47:58 compute-0 sudo[258854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:58 compute-0 sudo[258854]: pam_unix(sudo:session): session closed for user root
Nov 22 05:47:58 compute-0 sudo[258879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:47:58 compute-0 sudo[258879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:47:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.631213699 +0000 UTC m=+0.072824024 container create 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:47:58 compute-0 systemd[1]: Started libpod-conmon-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope.
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.586997934 +0000 UTC m=+0.028608329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:47:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.725208541 +0000 UTC m=+0.166818926 container init 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.738436745 +0000 UTC m=+0.180047060 container start 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:47:58 compute-0 adoring_gould[258961]: 167 167
Nov 22 05:47:58 compute-0 systemd[1]: libpod-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope: Deactivated successfully.
Nov 22 05:47:58 compute-0 conmon[258961]: conmon 35fd619446046537ec7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope/container/memory.events
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.754434633 +0000 UTC m=+0.196045028 container attach 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.75615184 +0000 UTC m=+0.197762195 container died 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:47:58 compute-0 ceph-mon[75840]: pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc4f8d18c6896a1f92095e41d2b2729b97fc1a389fcb79af4853dca8eee8a0d-merged.mount: Deactivated successfully.
Nov 22 05:47:58 compute-0 podman[258945]: 2025-11-22 05:47:58.838391335 +0000 UTC m=+0.280001660 container remove 35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:47:58 compute-0 systemd[1]: libpod-conmon-35fd619446046537ec7c2be671c7246167802b623c019b09260908bf5d2fd788.scope: Deactivated successfully.
Nov 22 05:47:59 compute-0 podman[258986]: 2025-11-22 05:47:59.086814347 +0000 UTC m=+0.066127314 container create f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:47:59 compute-0 systemd[1]: Started libpod-conmon-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope.
Nov 22 05:47:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:47:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:47:59 compute-0 podman[258986]: 2025-11-22 05:47:59.057899721 +0000 UTC m=+0.037212728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:47:59 compute-0 podman[258986]: 2025-11-22 05:47:59.166388611 +0000 UTC m=+0.145701598 container init f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:47:59 compute-0 podman[258986]: 2025-11-22 05:47:59.172873795 +0000 UTC m=+0.152186762 container start f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:47:59 compute-0 podman[258986]: 2025-11-22 05:47:59.186566832 +0000 UTC m=+0.165879819 container attach f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:48:00 compute-0 stupefied_mclaren[259002]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:48:00 compute-0 stupefied_mclaren[259002]: --> relative data size: 1.0
Nov 22 05:48:00 compute-0 stupefied_mclaren[259002]: --> All data devices are unavailable
Nov 22 05:48:00 compute-0 systemd[1]: libpod-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope: Deactivated successfully.
Nov 22 05:48:00 compute-0 podman[258986]: 2025-11-22 05:48:00.222137292 +0000 UTC m=+1.201450269 container died f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a1faa9aa9ef19708db9eb51fb3903c3055cc3e862ae850675954fa1bcd814d-merged.mount: Deactivated successfully.
Nov 22 05:48:00 compute-0 podman[258986]: 2025-11-22 05:48:00.293028033 +0000 UTC m=+1.272341000 container remove f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:48:00 compute-0 systemd[1]: libpod-conmon-f4f62b5c70dac14f2bcfb4a4c1d53008c154e91162ce8623684d54c3fdbfe020.scope: Deactivated successfully.
Nov 22 05:48:00 compute-0 sudo[258879]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:00 compute-0 sudo[259044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:48:00 compute-0 sudo[259044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:00 compute-0 sudo[259044]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:00 compute-0 sudo[259069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:48:00 compute-0 sudo[259069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:00 compute-0 sudo[259069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:00 compute-0 sudo[259094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:48:00 compute-0 sudo[259094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:00 compute-0 sudo[259094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:00 compute-0 sudo[259119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:48:00 compute-0 sudo[259119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.014157721 +0000 UTC m=+0.072350671 container create af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:00.979967584 +0000 UTC m=+0.038160544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:48:01 compute-0 systemd[1]: Started libpod-conmon-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope.
Nov 22 05:48:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.147864007 +0000 UTC m=+0.206056947 container init af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.155634455 +0000 UTC m=+0.213827365 container start af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:48:01 compute-0 blissful_hofstadter[259200]: 167 167
Nov 22 05:48:01 compute-0 systemd[1]: libpod-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope: Deactivated successfully.
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.300716645 +0000 UTC m=+0.358909655 container attach af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.30200178 +0000 UTC m=+0.360194700 container died af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce3a9a5995715dd1203225ec8a397d2284c5ec84e4be2597d0c43d80e5769898-merged.mount: Deactivated successfully.
Nov 22 05:48:01 compute-0 podman[259184]: 2025-11-22 05:48:01.502251199 +0000 UTC m=+0.560444099 container remove af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hofstadter, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:48:01 compute-0 systemd[1]: libpod-conmon-af3c540ad0e4b43c7a6f7643ba84a406d0297b49f10241843e62752417ca99a5.scope: Deactivated successfully.
Nov 22 05:48:01 compute-0 ceph-mon[75840]: pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:01 compute-0 podman[259224]: 2025-11-22 05:48:01.686407708 +0000 UTC m=+0.048018509 container create 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:48:01 compute-0 systemd[1]: Started libpod-conmon-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope.
Nov 22 05:48:01 compute-0 podman[259224]: 2025-11-22 05:48:01.666413432 +0000 UTC m=+0.028024273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:48:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:01 compute-0 podman[259224]: 2025-11-22 05:48:01.782755692 +0000 UTC m=+0.144366583 container init 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:48:01 compute-0 podman[259224]: 2025-11-22 05:48:01.793458429 +0000 UTC m=+0.155069230 container start 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:48:01 compute-0 podman[259224]: 2025-11-22 05:48:01.797111117 +0000 UTC m=+0.158721938 container attach 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:48:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:02 compute-0 festive_tesla[259240]: {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     "0": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "devices": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "/dev/loop3"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             ],
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_name": "ceph_lv0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_size": "21470642176",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "name": "ceph_lv0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "tags": {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_name": "ceph",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.crush_device_class": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.encrypted": "0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_id": "0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.vdo": "0"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             },
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "vg_name": "ceph_vg0"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         }
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     ],
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     "1": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "devices": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "/dev/loop4"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             ],
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_name": "ceph_lv1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_size": "21470642176",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "name": "ceph_lv1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "tags": {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_name": "ceph",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.crush_device_class": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.encrypted": "0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_id": "1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.vdo": "0"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             },
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "vg_name": "ceph_vg1"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         }
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     ],
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     "2": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "devices": [
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "/dev/loop5"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             ],
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_name": "ceph_lv2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_size": "21470642176",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "name": "ceph_lv2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "tags": {
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.cluster_name": "ceph",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.crush_device_class": "",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.encrypted": "0",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osd_id": "2",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:                 "ceph.vdo": "0"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             },
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "type": "block",
Nov 22 05:48:02 compute-0 festive_tesla[259240]:             "vg_name": "ceph_vg2"
Nov 22 05:48:02 compute-0 festive_tesla[259240]:         }
Nov 22 05:48:02 compute-0 festive_tesla[259240]:     ]
Nov 22 05:48:02 compute-0 festive_tesla[259240]: }
Nov 22 05:48:02 compute-0 systemd[1]: libpod-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope: Deactivated successfully.
Nov 22 05:48:02 compute-0 conmon[259240]: conmon 9bc5cd46174ef2210c99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope/container/memory.events
Nov 22 05:48:02 compute-0 podman[259224]: 2025-11-22 05:48:02.603548362 +0000 UTC m=+0.965159183 container died 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5ce715227f6b166760c013dede627b677d3df827774e890c31b2b3b8249074f-merged.mount: Deactivated successfully.
Nov 22 05:48:02 compute-0 podman[259224]: 2025-11-22 05:48:02.765442194 +0000 UTC m=+1.127053005 container remove 9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_tesla, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:48:02 compute-0 systemd[1]: libpod-conmon-9bc5cd46174ef2210c995338662fe1cf39fcd63ef70fb6c6d3241980506528a5.scope: Deactivated successfully.
Nov 22 05:48:02 compute-0 sudo[259119]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:02 compute-0 sudo[259263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:48:02 compute-0 sudo[259263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:02 compute-0 sudo[259263]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:02 compute-0 sudo[259288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:48:02 compute-0 sudo[259288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:02 compute-0 sudo[259288]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:03 compute-0 sudo[259313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:48:03 compute-0 sudo[259313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:03 compute-0 sudo[259313]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:03 compute-0 sudo[259338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:48:03 compute-0 sudo[259338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.526442081 +0000 UTC m=+0.041514664 container create 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:48:03 compute-0 systemd[1]: Started libpod-conmon-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope.
Nov 22 05:48:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.506829945 +0000 UTC m=+0.021902538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.604610847 +0000 UTC m=+0.119683450 container init 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.612805497 +0000 UTC m=+0.127878070 container start 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 05:48:03 compute-0 focused_panini[259420]: 167 167
Nov 22 05:48:03 compute-0 systemd[1]: libpod-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope: Deactivated successfully.
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.630942763 +0000 UTC m=+0.146015376 container attach 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.631298663 +0000 UTC m=+0.146371236 container died 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 05:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d2372c86aba2530702ead2654d0bbb55349267c4651fcb294fa20bc5d5aede-merged.mount: Deactivated successfully.
Nov 22 05:48:03 compute-0 ceph-mon[75840]: pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:03 compute-0 podman[259404]: 2025-11-22 05:48:03.717451283 +0000 UTC m=+0.232523856 container remove 30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:48:03 compute-0 systemd[1]: libpod-conmon-30379562945f73419b54ed2869129b577d72e7168b1f1dcfb564566a27196d01.scope: Deactivated successfully.
Nov 22 05:48:03 compute-0 podman[259444]: 2025-11-22 05:48:03.961070936 +0000 UTC m=+0.098659556 container create 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:48:03 compute-0 podman[259444]: 2025-11-22 05:48:03.891130701 +0000 UTC m=+0.028719371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:48:04 compute-0 systemd[1]: Started libpod-conmon-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope.
Nov 22 05:48:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:48:04 compute-0 podman[259444]: 2025-11-22 05:48:04.082592945 +0000 UTC m=+0.220181525 container init 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:48:04 compute-0 podman[259444]: 2025-11-22 05:48:04.088628267 +0000 UTC m=+0.226216847 container start 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:48:04 compute-0 podman[259444]: 2025-11-22 05:48:04.110103262 +0000 UTC m=+0.247691862 container attach 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:48:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]: {
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_id": 1,
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "type": "bluestore"
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     },
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_id": 2,
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "type": "bluestore"
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     },
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_id": 0,
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:         "type": "bluestore"
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]:     }
Nov 22 05:48:05 compute-0 suspicious_maxwell[259461]: }
Nov 22 05:48:05 compute-0 systemd[1]: libpod-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope: Deactivated successfully.
Nov 22 05:48:05 compute-0 podman[259444]: 2025-11-22 05:48:05.079010735 +0000 UTC m=+1.216599325 container died 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 05:48:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e3b96f96f1621fffaf03b1c18a4ca13e75934124f6afd0ed6078bb5c1a17cc-merged.mount: Deactivated successfully.
Nov 22 05:48:05 compute-0 podman[259444]: 2025-11-22 05:48:05.179337675 +0000 UTC m=+1.316926265 container remove 9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 05:48:05 compute-0 systemd[1]: libpod-conmon-9b0f6521acffdca2bdb4d40bfc22aa12c7ff411d9151c968e902dedc8a538a8e.scope: Deactivated successfully.
Nov 22 05:48:05 compute-0 sudo[259338]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:48:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:48:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:48:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:48:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 191fa7df-c4dd-4616-9a28-f8fc520ae19d does not exist
Nov 22 05:48:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev feced4bd-3b03-44a5-8d37-d8c20dbd9ad7 does not exist
Nov 22 05:48:05 compute-0 sudo[259505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:48:05 compute-0 sudo[259505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:05 compute-0 sudo[259505]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:05 compute-0 sudo[259530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:48:05 compute-0 sudo[259530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:48:05 compute-0 sudo[259530]: pam_unix(sudo:session): session closed for user root
Nov 22 05:48:05 compute-0 ceph-mon[75840]: pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:48:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:48:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:07 compute-0 ceph-mon[75840]: pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:09 compute-0 ceph-mon[75840]: pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:10 compute-0 ceph-mon[75840]: pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:11 compute-0 podman[259555]: 2025-11-22 05:48:11.3541215 +0000 UTC m=+0.196167969 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:48:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:13 compute-0 ceph-mon[75840]: pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:15 compute-0 ceph-mon[75840]: pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:16 compute-0 nova_compute[255660]: 2025-11-22 05:48:16.984 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:16 compute-0 nova_compute[255660]: 2025-11-22 05:48:16.985 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:16 compute-0 nova_compute[255660]: 2025-11-22 05:48:16.985 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:48:16 compute-0 nova_compute[255660]: 2025-11-22 05:48:16.986 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.003 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.004 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.032 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.033 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.034 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:48:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:48:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/411300330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.502 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:48:17 compute-0 ceph-mon[75840]: pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/411300330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.652 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.653 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5134MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.654 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.654 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.715 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.715 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:48:17 compute-0 nova_compute[255660]: 2025-11-22 05:48:17.729 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:48:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:48:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199327227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:48:18 compute-0 nova_compute[255660]: 2025-11-22 05:48:18.165 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:48:18 compute-0 nova_compute[255660]: 2025-11-22 05:48:18.174 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:48:18 compute-0 nova_compute[255660]: 2025-11-22 05:48:18.193 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:48:18 compute-0 nova_compute[255660]: 2025-11-22 05:48:18.196 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:48:18 compute-0 nova_compute[255660]: 2025-11-22 05:48:18.197 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:48:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2199327227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:48:19 compute-0 ceph-mon[75840]: pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:21 compute-0 ceph-mon[75840]: pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:22 compute-0 podman[259626]: 2025-11-22 05:48:22.230738739 +0000 UTC m=+0.083724110 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:48:22 compute-0 podman[259627]: 2025-11-22 05:48:22.243343646 +0000 UTC m=+0.096779440 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 22 05:48:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:23 compute-0 ceph-mon[75840]: pgmap v826: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:24 compute-0 sshd-session[259662]: Invalid user trading from 80.94.92.166 port 38686
Nov 22 05:48:24 compute-0 sshd-session[259662]: Connection closed by invalid user trading 80.94.92.166 port 38686 [preauth]
Nov 22 05:48:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:25 compute-0 ceph-mon[75840]: pgmap v827: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:27 compute-0 ceph-mon[75840]: pgmap v828: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:29 compute-0 ceph-mon[75840]: pgmap v829: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:31 compute-0 ceph-mon[75840]: pgmap v830: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:33 compute-0 ceph-mon[75840]: pgmap v831: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:35 compute-0 ceph-mon[75840]: pgmap v832: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:48:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:48:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:48:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:48:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:37 compute-0 ceph-mon[75840]: pgmap v833: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:39 compute-0 ceph-mon[75840]: pgmap v834: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:41 compute-0 ceph-mon[75840]: pgmap v835: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:42 compute-0 podman[259664]: 2025-11-22 05:48:42.286832096 +0000 UTC m=+0.134509289 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:48:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:48:43
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'volumes', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:48:43 compute-0 ceph-mon[75840]: pgmap v836: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:48:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:48:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:45 compute-0 ceph-mon[75840]: pgmap v837: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:48:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:48:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:48:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:48:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:47 compute-0 ceph-mon[75840]: pgmap v838: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:48:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/3859533984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:48:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:48 compute-0 ceph-mon[75840]: pgmap v839: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:51 compute-0 ceph-mon[75840]: pgmap v840: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:48:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:48:53 compute-0 podman[259691]: 2025-11-22 05:48:53.235707171 +0000 UTC m=+0.084303337 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 05:48:53 compute-0 podman[259690]: 2025-11-22 05:48:53.252613843 +0000 UTC m=+0.104642831 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:48:53 compute-0 ceph-mon[75840]: pgmap v841: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:55 compute-0 ceph-mon[75840]: pgmap v842: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:56 compute-0 ceph-mon[75840]: pgmap v843: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:48:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:48:59 compute-0 ceph-mon[75840]: pgmap v844: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:01 compute-0 ceph-mon[75840]: pgmap v845: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:02 compute-0 ceph-mon[75840]: pgmap v846: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:05 compute-0 sudo[259729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:05 compute-0 sudo[259729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:05 compute-0 sudo[259729]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:05 compute-0 sudo[259754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:49:05 compute-0 sudo[259754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:05 compute-0 sudo[259754]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:05 compute-0 sudo[259779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:05 compute-0 sudo[259779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:05 compute-0 sudo[259779]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:05 compute-0 ceph-mon[75840]: pgmap v847: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:05 compute-0 sudo[259804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:49:05 compute-0 sudo[259804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:06 compute-0 sudo[259804]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8cea6c5e-b231-498e-9f64-8288511f3b26 does not exist
Nov 22 05:49:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d7e617b5-befa-4e6b-9d7f-7dbbf7a402a7 does not exist
Nov 22 05:49:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev eaf20724-5698-472e-966d-0be7890a2d58 does not exist
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:49:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:49:06 compute-0 sudo[259861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:06 compute-0 sudo[259861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:06 compute-0 sudo[259861]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:06 compute-0 sudo[259886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:49:06 compute-0 sudo[259886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:06 compute-0 sudo[259886]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:06 compute-0 sudo[259911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:06 compute-0 sudo[259911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:06 compute-0 sudo[259911]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:49:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:49:06 compute-0 sudo[259936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:49:06 compute-0 sudo[259936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.136862079 +0000 UTC m=+0.059770970 container create 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:49:07 compute-0 systemd[1]: Started libpod-conmon-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope.
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.115189669 +0000 UTC m=+0.038098600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.23892912 +0000 UTC m=+0.161838041 container init 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.247377966 +0000 UTC m=+0.170287007 container start 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.251057785 +0000 UTC m=+0.173966716 container attach 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:49:07 compute-0 pensive_lederberg[260020]: 167 167
Nov 22 05:49:07 compute-0 systemd[1]: libpod-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope: Deactivated successfully.
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.255511614 +0000 UTC m=+0.178420495 container died 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:49:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-23103f12f0b6d7e63ad01a432467eb457e4fddfb3147a22a3192167364101779-merged.mount: Deactivated successfully.
Nov 22 05:49:07 compute-0 podman[260003]: 2025-11-22 05:49:07.310444433 +0000 UTC m=+0.233353324 container remove 2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:49:07 compute-0 systemd[1]: libpod-conmon-2c44f8c914a0f2152a55977fb9cd94392d451acd9cd77e0e9684fd553456b95a.scope: Deactivated successfully.
Nov 22 05:49:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:07 compute-0 podman[260044]: 2025-11-22 05:49:07.548637146 +0000 UTC m=+0.071370161 container create de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:49:07 compute-0 systemd[1]: Started libpod-conmon-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope.
Nov 22 05:49:07 compute-0 podman[260044]: 2025-11-22 05:49:07.522398133 +0000 UTC m=+0.045131188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:07 compute-0 podman[260044]: 2025-11-22 05:49:07.655022622 +0000 UTC m=+0.177755677 container init de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:49:07 compute-0 podman[260044]: 2025-11-22 05:49:07.669180761 +0000 UTC m=+0.191913766 container start de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:49:07 compute-0 ceph-mon[75840]: pgmap v848: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:07 compute-0 podman[260044]: 2025-11-22 05:49:07.674508063 +0000 UTC m=+0.197241048 container attach de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:49:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:08 compute-0 xenodochial_hellman[260061]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:49:08 compute-0 xenodochial_hellman[260061]: --> relative data size: 1.0
Nov 22 05:49:08 compute-0 xenodochial_hellman[260061]: --> All data devices are unavailable
Nov 22 05:49:08 compute-0 systemd[1]: libpod-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Deactivated successfully.
Nov 22 05:49:08 compute-0 systemd[1]: libpod-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Consumed 1.013s CPU time.
Nov 22 05:49:08 compute-0 podman[260044]: 2025-11-22 05:49:08.738183451 +0000 UTC m=+1.260916436 container died de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4121c474708a1af9984c1089f4da4eb451254dd7f6ea72db02740570b85a140b-merged.mount: Deactivated successfully.
Nov 22 05:49:08 compute-0 podman[260044]: 2025-11-22 05:49:08.797115947 +0000 UTC m=+1.319848932 container remove de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:49:08 compute-0 systemd[1]: libpod-conmon-de0fa518ac3c4b1cfd8a29542323b1d7aa0804a1c45f434907e4e6ffe4053a6f.scope: Deactivated successfully.
Nov 22 05:49:08 compute-0 sudo[259936]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:08 compute-0 sudo[260101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:08 compute-0 sudo[260101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:08 compute-0 sudo[260101]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:08 compute-0 sudo[260126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:49:09 compute-0 sudo[260126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:09 compute-0 sudo[260126]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:09 compute-0 sudo[260151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:09 compute-0 sudo[260151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:09 compute-0 sudo[260151]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:09 compute-0 sudo[260176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:49:09 compute-0 sudo[260176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.580651069 +0000 UTC m=+0.047898182 container create 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:49:09 compute-0 systemd[1]: Started libpod-conmon-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope.
Nov 22 05:49:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.560786718 +0000 UTC m=+0.028033811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.673224766 +0000 UTC m=+0.140471859 container init 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:49:09 compute-0 ceph-mon[75840]: pgmap v849: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.688836494 +0000 UTC m=+0.156083607 container start 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.693212621 +0000 UTC m=+0.160459744 container attach 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:49:09 compute-0 nifty_ishizaka[260259]: 167 167
Nov 22 05:49:09 compute-0 systemd[1]: libpod-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope: Deactivated successfully.
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.696371035 +0000 UTC m=+0.163618138 container died 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-0052346d5411a7ac5f40ce4c1fa7fede3c315c86a3eb5c459274ae82d5fd37df-merged.mount: Deactivated successfully.
Nov 22 05:49:09 compute-0 podman[260243]: 2025-11-22 05:49:09.737411273 +0000 UTC m=+0.204658346 container remove 30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:49:09 compute-0 systemd[1]: libpod-conmon-30574fc7abe93355b2728ab48e1059a68817d89740efe68d984053f3ba31c2be.scope: Deactivated successfully.
Nov 22 05:49:09 compute-0 podman[260281]: 2025-11-22 05:49:09.992780006 +0000 UTC m=+0.066887451 container create 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:49:10 compute-0 systemd[1]: Started libpod-conmon-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope.
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:09.968777344 +0000 UTC m=+0.042884829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:10.119174707 +0000 UTC m=+0.193282222 container init 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:10.127477149 +0000 UTC m=+0.201584634 container start 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:10.132098583 +0000 UTC m=+0.206206048 container attach 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:49:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:10 compute-0 exciting_darwin[260298]: {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     "0": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "devices": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "/dev/loop3"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             ],
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_name": "ceph_lv0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_size": "21470642176",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "name": "ceph_lv0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "tags": {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_name": "ceph",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.crush_device_class": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.encrypted": "0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_id": "0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.vdo": "0"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             },
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "vg_name": "ceph_vg0"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         }
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     ],
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     "1": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "devices": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "/dev/loop4"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             ],
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_name": "ceph_lv1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_size": "21470642176",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "name": "ceph_lv1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "tags": {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_name": "ceph",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.crush_device_class": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.encrypted": "0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_id": "1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.vdo": "0"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             },
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "vg_name": "ceph_vg1"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         }
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     ],
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     "2": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "devices": [
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "/dev/loop5"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             ],
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_name": "ceph_lv2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_size": "21470642176",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "name": "ceph_lv2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "tags": {
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.cluster_name": "ceph",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.crush_device_class": "",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.encrypted": "0",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osd_id": "2",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:                 "ceph.vdo": "0"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             },
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "type": "block",
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:             "vg_name": "ceph_vg2"
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:         }
Nov 22 05:49:10 compute-0 exciting_darwin[260298]:     ]
Nov 22 05:49:10 compute-0 exciting_darwin[260298]: }
Nov 22 05:49:10 compute-0 systemd[1]: libpod-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope: Deactivated successfully.
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:10.878941564 +0000 UTC m=+0.953049009 container died 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5dd8611a9687fbf055eaf06b1bf2bc4f68dbc58e1fb7cac14c81478458e7ef-merged.mount: Deactivated successfully.
Nov 22 05:49:10 compute-0 podman[260281]: 2025-11-22 05:49:10.941938279 +0000 UTC m=+1.016045764 container remove 693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:49:10 compute-0 systemd[1]: libpod-conmon-693fb1b80bac99433b70aafa556806c18b714262a59b9e6d0f15874d1860c1c0.scope: Deactivated successfully.
Nov 22 05:49:10 compute-0 sudo[260176]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:11 compute-0 sudo[260320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:11 compute-0 sudo[260320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:11 compute-0 sudo[260320]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:11 compute-0 sudo[260345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:49:11 compute-0 sudo[260345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:11 compute-0 sudo[260345]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:11 compute-0 sudo[260370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:11 compute-0 sudo[260370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:11 compute-0 sudo[260370]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:11 compute-0 sudo[260395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:49:11 compute-0 sudo[260395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:11 compute-0 ceph-mon[75840]: pgmap v850: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.709848903 +0000 UTC m=+0.059360768 container create 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:49:11 compute-0 systemd[1]: Started libpod-conmon-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope.
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.682279366 +0000 UTC m=+0.031791301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.804085344 +0000 UTC m=+0.153597289 container init 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.811342269 +0000 UTC m=+0.160854114 container start 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.815355366 +0000 UTC m=+0.164867241 container attach 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:49:11 compute-0 systemd[1]: libpod-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope: Deactivated successfully.
Nov 22 05:49:11 compute-0 gallant_kilby[260478]: 167 167
Nov 22 05:49:11 compute-0 conmon[260478]: conmon 82cec222b6ed94d80e74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope/container/memory.events
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.821204083 +0000 UTC m=+0.170715958 container died 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e758e9e3675e82d313ebdba658959e3f47334fdf0ab23e64a6c543bf882f292a-merged.mount: Deactivated successfully.
Nov 22 05:49:11 compute-0 podman[260462]: 2025-11-22 05:49:11.864049969 +0000 UTC m=+0.213561804 container remove 82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:49:11 compute-0 systemd[1]: libpod-conmon-82cec222b6ed94d80e74de784fcd80be9167c9955b89526b427728b7c62959ad.scope: Deactivated successfully.
Nov 22 05:49:12 compute-0 podman[260502]: 2025-11-22 05:49:12.062299083 +0000 UTC m=+0.049879126 container create f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:49:12 compute-0 systemd[1]: Started libpod-conmon-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope.
Nov 22 05:49:12 compute-0 podman[260502]: 2025-11-22 05:49:12.0401283 +0000 UTC m=+0.027708383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:49:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:49:12 compute-0 podman[260502]: 2025-11-22 05:49:12.169241934 +0000 UTC m=+0.156822037 container init f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:49:12 compute-0 podman[260502]: 2025-11-22 05:49:12.182749015 +0000 UTC m=+0.170329098 container start f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:49:12 compute-0 podman[260502]: 2025-11-22 05:49:12.186795984 +0000 UTC m=+0.174376067 container attach f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:49:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:13 compute-0 sweet_hoover[260518]: {
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_id": 1,
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "type": "bluestore"
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     },
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_id": 2,
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "type": "bluestore"
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     },
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_id": 0,
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:         "type": "bluestore"
Nov 22 05:49:13 compute-0 sweet_hoover[260518]:     }
Nov 22 05:49:13 compute-0 sweet_hoover[260518]: }
Nov 22 05:49:13 compute-0 systemd[1]: libpod-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Deactivated successfully.
Nov 22 05:49:13 compute-0 systemd[1]: libpod-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Consumed 1.038s CPU time.
Nov 22 05:49:13 compute-0 podman[260502]: 2025-11-22 05:49:13.211090987 +0000 UTC m=+1.198671040 container died f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:49:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e31bd070e5e715e1f9a2216f43bed10e1679e611b45434536456515fb57c97-merged.mount: Deactivated successfully.
Nov 22 05:49:13 compute-0 podman[260546]: 2025-11-22 05:49:13.257622892 +0000 UTC m=+0.109264215 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:49:13 compute-0 podman[260502]: 2025-11-22 05:49:13.275699685 +0000 UTC m=+1.263279728 container remove f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:49:13 compute-0 systemd[1]: libpod-conmon-f5cd389d9f8a9109fcc633f2c24df17e5e8d2a853fd9b5a241dfd6462133a054.scope: Deactivated successfully.
Nov 22 05:49:13 compute-0 sudo[260395]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:49:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:49:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.337 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e56ba9be-24c6-4266-9cbd-5c6ac622dde3 does not exist
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 78235278-85ea-4bd0-9cb8-1de7f96089aa does not exist
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.355 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.356 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.376 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.377 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.378 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.378 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.379 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:49:13 compute-0 sudo[260590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:49:13 compute-0 sudo[260590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:13 compute-0 sudo[260590]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:13 compute-0 sudo[260616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:49:13 compute-0 sudo[260616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:49:13 compute-0 sudo[260616]: pam_unix(sudo:session): session closed for user root
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:49:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3877995662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:49:13 compute-0 ceph-mon[75840]: pgmap v851: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:49:13 compute-0 nova_compute[255660]: 2025-11-22 05:49:13.833 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.009 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5129MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.011 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.183 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.184 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.206 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:49:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:49:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29265632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.644 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.651 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.709 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.712 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:49:14 compute-0 nova_compute[255660]: 2025-11-22 05:49:14.713 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:49:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3877995662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:49:14 compute-0 ceph-mon[75840]: pgmap v852: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/29265632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.486 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.487 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.487 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.488 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.488 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:15 compute-0 nova_compute[255660]: 2025-11-22 05:49:15.489 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:49:16 compute-0 nova_compute[255660]: 2025-11-22 05:49:16.127 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:16 compute-0 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:49:16 compute-0 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:49:16 compute-0 nova_compute[255660]: 2025-11-22 05:49:16.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:49:16 compute-0 nova_compute[255660]: 2025-11-22 05:49:16.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:49:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:17 compute-0 ceph-mon[75840]: pgmap v853: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:19 compute-0 ceph-mon[75840]: pgmap v854: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:21 compute-0 ceph-mon[75840]: pgmap v855: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:23 compute-0 ceph-mon[75840]: pgmap v856: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:24 compute-0 podman[260684]: 2025-11-22 05:49:24.211803626 +0000 UTC m=+0.066284164 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:49:24 compute-0 podman[260685]: 2025-11-22 05:49:24.251344563 +0000 UTC m=+0.097505319 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 05:49:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:24 compute-0 ceph-mon[75840]: pgmap v857: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:27 compute-0 ceph-mon[75840]: pgmap v858: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:29 compute-0 ceph-mon[75840]: pgmap v859: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:31 compute-0 ceph-mon[75840]: pgmap v860: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:32 compute-0 ceph-mon[75840]: pgmap v861: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:35 compute-0 ceph-mon[75840]: pgmap v862: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:49:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:49:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:49:36.929 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:49:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:37 compute-0 ceph-mon[75840]: pgmap v863: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:39 compute-0 ceph-mon[75840]: pgmap v864: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:41 compute-0 ceph-mon[75840]: pgmap v865: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:43 compute-0 ceph-mon[75840]: pgmap v866: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:49:43
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:49:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:49:44 compute-0 podman[260723]: 2025-11-22 05:49:44.265613505 +0000 UTC m=+0.122881019 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 05:49:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:45 compute-0 ceph-mon[75840]: pgmap v867: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:47 compute-0 ceph-mon[75840]: pgmap v868: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:49:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:49:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:49:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:49:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:49:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1641952068' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:49:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:49 compute-0 ceph-mon[75840]: pgmap v869: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:51 compute-0 ceph-mon[75840]: pgmap v870: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:49:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:49:53 compute-0 ceph-mon[75840]: pgmap v871: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:55 compute-0 podman[260751]: 2025-11-22 05:49:55.211165159 +0000 UTC m=+0.062287587 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:49:55 compute-0 podman[260752]: 2025-11-22 05:49:55.242371355 +0000 UTC m=+0.080138156 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 22 05:49:55 compute-0 ceph-mon[75840]: pgmap v872: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:49:57 compute-0 ceph-mon[75840]: pgmap v873: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.776003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598776039, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 3459539, "memory_usage": 3516448, "flush_reason": "Manual Compaction"}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598815608, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3394645, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16347, "largest_seqno": 18395, "table_properties": {"data_size": 3385323, "index_size": 5880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18399, "raw_average_key_size": 19, "raw_value_size": 3366834, "raw_average_value_size": 3628, "num_data_blocks": 266, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790368, "oldest_key_time": 1763790368, "file_creation_time": 1763790598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 39702 microseconds, and 13770 cpu microseconds.
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.815698) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3394645 bytes OK
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.815729) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830504) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830566) EVENT_LOG_v1 {"time_micros": 1763790598830555, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.830592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3450969, prev total WAL file size 3450969, number of live WAL files 2.
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.832113) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3315KB)], [38(7503KB)]
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598832228, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11078353, "oldest_snapshot_seqno": -1}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4417 keys, 9313095 bytes, temperature: kUnknown
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598917226, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9313095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9279857, "index_size": 21096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106807, "raw_average_key_size": 24, "raw_value_size": 9196351, "raw_average_value_size": 2082, "num_data_blocks": 895, "num_entries": 4417, "num_filter_entries": 4417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790598, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.917560) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9313095 bytes
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.919711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.2 rd, 109.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4931, records dropped: 514 output_compression: NoCompression
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.919762) EVENT_LOG_v1 {"time_micros": 1763790598919746, "job": 18, "event": "compaction_finished", "compaction_time_micros": 85073, "compaction_time_cpu_micros": 36793, "output_level": 6, "num_output_files": 1, "total_output_size": 9313095, "num_input_records": 4931, "num_output_records": 4417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598921013, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790598923717, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.831965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:58 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:49:58.923843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:49:59 compute-0 ceph-mon[75840]: pgmap v874: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:01 compute-0 ceph-mon[75840]: pgmap v875: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:02 compute-0 ceph-mon[75840]: pgmap v876: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:05 compute-0 ceph-mon[75840]: pgmap v877: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:07 compute-0 ceph-mon[75840]: pgmap v878: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:09 compute-0 ceph-mon[75840]: pgmap v879: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.166 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:50:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:50:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/591693286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.617 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:50:11 compute-0 ceph-mon[75840]: pgmap v880: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/591693286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.802 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.804 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.805 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.805 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.898 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.899 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:50:11 compute-0 nova_compute[255660]: 2025-11-22 05:50:11.921 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:50:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:50:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242325174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.389 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.397 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.415 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.417 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.418 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.419 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.419 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.435 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.437 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.437 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 05:50:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:12 compute-0 nova_compute[255660]: 2025-11-22 05:50:12.448 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:12 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1242325174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:50:13 compute-0 nova_compute[255660]: 2025-11-22 05:50:13.459 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:13 compute-0 sudo[260833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:13 compute-0 sudo[260833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:13 compute-0 sudo[260833]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:13 compute-0 sudo[260858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:50:13 compute-0 sudo[260858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:13 compute-0 sudo[260858]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:13 compute-0 ceph-mon[75840]: pgmap v881: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:13 compute-0 sudo[260883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:13 compute-0 sudo[260883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:13 compute-0 sudo[260883]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:13 compute-0 sudo[260908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:50:13 compute-0 sudo[260908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:14 compute-0 sudo[260908]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ae4ab058-ec59-4925-b3aa-49f1eb823cd9 does not exist
Nov 22 05:50:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a3a29f18-4d70-42db-8341-e81d57f081d6 does not exist
Nov 22 05:50:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4a39b3e7-d555-4897-8e4d-da4e8525759c does not exist
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:50:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:50:14 compute-0 sudo[260965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:14 compute-0 sudo[260965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:14 compute-0 sudo[260965]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:14 compute-0 sudo[260996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:50:14 compute-0 sudo[260996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:14 compute-0 sudo[260996]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:14 compute-0 podman[260989]: 2025-11-22 05:50:14.752581955 +0000 UTC m=+0.132690881 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:50:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:50:14 compute-0 sudo[261036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:14 compute-0 sudo[261036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:14 compute-0 sudo[261036]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:14 compute-0 sudo[261065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:50:14 compute-0 sudo[261065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:15 compute-0 nova_compute[255660]: 2025-11-22 05:50:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:15 compute-0 nova_compute[255660]: 2025-11-22 05:50:15.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.357778946 +0000 UTC m=+0.066193411 container create f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 05:50:15 compute-0 systemd[1]: Started libpod-conmon-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope.
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.33437648 +0000 UTC m=+0.042790955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.459696883 +0000 UTC m=+0.168111408 container init f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.471099228 +0000 UTC m=+0.179513693 container start f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.47526887 +0000 UTC m=+0.183683385 container attach f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:50:15 compute-0 interesting_rosalind[261145]: 167 167
Nov 22 05:50:15 compute-0 systemd[1]: libpod-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope: Deactivated successfully.
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.480083409 +0000 UTC m=+0.188497874 container died f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:50:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-81a657a6f936d58d50111a864d8e613446753d4324b3dccea14314c14a55c7c0-merged.mount: Deactivated successfully.
Nov 22 05:50:15 compute-0 podman[261129]: 2025-11-22 05:50:15.544239035 +0000 UTC m=+0.252653510 container remove f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_rosalind, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:50:15 compute-0 systemd[1]: libpod-conmon-f1bd0bf17f943fabe897b372548c38836a043a45244c1713aae78afd88e9987f.scope: Deactivated successfully.
Nov 22 05:50:15 compute-0 ceph-mon[75840]: pgmap v882: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:15 compute-0 podman[261168]: 2025-11-22 05:50:15.783316401 +0000 UTC m=+0.060193981 container create a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:50:15 compute-0 systemd[1]: Started libpod-conmon-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope.
Nov 22 05:50:15 compute-0 podman[261168]: 2025-11-22 05:50:15.754068899 +0000 UTC m=+0.030946479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:15 compute-0 podman[261168]: 2025-11-22 05:50:15.892705378 +0000 UTC m=+0.169582988 container init a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:50:15 compute-0 podman[261168]: 2025-11-22 05:50:15.906943198 +0000 UTC m=+0.183820778 container start a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:50:15 compute-0 podman[261168]: 2025-11-22 05:50:15.910758421 +0000 UTC m=+0.187636051 container attach a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:16 compute-0 nova_compute[255660]: 2025-11-22 05:50:16.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:50:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:17 compute-0 trusting_kilby[261184]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:50:17 compute-0 trusting_kilby[261184]: --> relative data size: 1.0
Nov 22 05:50:17 compute-0 trusting_kilby[261184]: --> All data devices are unavailable
Nov 22 05:50:17 compute-0 systemd[1]: libpod-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Deactivated successfully.
Nov 22 05:50:17 compute-0 podman[261168]: 2025-11-22 05:50:17.096704449 +0000 UTC m=+1.373582019 container died a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:50:17 compute-0 systemd[1]: libpod-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Consumed 1.154s CPU time.
Nov 22 05:50:17 compute-0 nova_compute[255660]: 2025-11-22 05:50:17.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:17 compute-0 nova_compute[255660]: 2025-11-22 05:50:17.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-11c1cb11d710995098df44ce3dea41073b2f46f1ce3df197755d8a4a4df03d8b-merged.mount: Deactivated successfully.
Nov 22 05:50:17 compute-0 podman[261168]: 2025-11-22 05:50:17.180707956 +0000 UTC m=+1.457585526 container remove a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:50:17 compute-0 systemd[1]: libpod-conmon-a356fff79c9289c96bf93349108eb22617e773dd9c8c11aefd1c704dc0571d2d.scope: Deactivated successfully.
Nov 22 05:50:17 compute-0 sudo[261065]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:17 compute-0 sudo[261225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:17 compute-0 sudo[261225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:17 compute-0 sudo[261225]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:17 compute-0 sudo[261250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:50:17 compute-0 sudo[261250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:17 compute-0 sudo[261250]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:17 compute-0 sudo[261275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:17 compute-0 sudo[261275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:17 compute-0 sudo[261275]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:17 compute-0 sudo[261300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:50:17 compute-0 sudo[261300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:17 compute-0 ceph-mon[75840]: pgmap v883: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:17 compute-0 podman[261366]: 2025-11-22 05:50:17.999174424 +0000 UTC m=+0.048004936 container create 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:50:18 compute-0 systemd[1]: Started libpod-conmon-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope.
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:17.972850309 +0000 UTC m=+0.021680911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:18.093128922 +0000 UTC m=+0.141959454 container init 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:18.103655374 +0000 UTC m=+0.152485886 container start 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:50:18 compute-0 systemd[1]: libpod-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope: Deactivated successfully.
Nov 22 05:50:18 compute-0 stupefied_elion[261383]: 167 167
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:18.109572464 +0000 UTC m=+0.158402986 container attach 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:50:18 compute-0 conmon[261383]: conmon 3c0a7c337e2763425150 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope/container/memory.events
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:18.110231311 +0000 UTC m=+0.159061833 container died 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a61d16dbe76589d24fc3c642b0ccfd59c37222096a6dc19a6352a5982c35c9db-merged.mount: Deactivated successfully.
Nov 22 05:50:18 compute-0 podman[261366]: 2025-11-22 05:50:18.151316253 +0000 UTC m=+0.200146765 container remove 3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:50:18 compute-0 systemd[1]: libpod-conmon-3c0a7c337e27634251508dacc8b0d15d62e4aac4801ec5ef49ed9c1f139d663d.scope: Deactivated successfully.
Nov 22 05:50:18 compute-0 podman[261408]: 2025-11-22 05:50:18.309137718 +0000 UTC m=+0.036543442 container create 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 05:50:18 compute-0 systemd[1]: Started libpod-conmon-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope.
Nov 22 05:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:18 compute-0 podman[261408]: 2025-11-22 05:50:18.37033755 +0000 UTC m=+0.097743314 container init 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:50:18 compute-0 podman[261408]: 2025-11-22 05:50:18.377518963 +0000 UTC m=+0.104924687 container start 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:50:18 compute-0 podman[261408]: 2025-11-22 05:50:18.381169981 +0000 UTC m=+0.108575785 container attach 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:50:18 compute-0 podman[261408]: 2025-11-22 05:50:18.292413259 +0000 UTC m=+0.019819003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]: {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     "0": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "devices": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "/dev/loop3"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             ],
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_name": "ceph_lv0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_size": "21470642176",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "name": "ceph_lv0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "tags": {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_name": "ceph",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.crush_device_class": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.encrypted": "0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_id": "0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.vdo": "0"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             },
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "vg_name": "ceph_vg0"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         }
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     ],
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     "1": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "devices": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "/dev/loop4"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             ],
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_name": "ceph_lv1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_size": "21470642176",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "name": "ceph_lv1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "tags": {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_name": "ceph",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.crush_device_class": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.encrypted": "0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_id": "1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.vdo": "0"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             },
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "vg_name": "ceph_vg1"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         }
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     ],
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     "2": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "devices": [
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "/dev/loop5"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             ],
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_name": "ceph_lv2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_size": "21470642176",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "name": "ceph_lv2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "tags": {
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.cluster_name": "ceph",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.crush_device_class": "",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.encrypted": "0",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osd_id": "2",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:                 "ceph.vdo": "0"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             },
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "type": "block",
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:             "vg_name": "ceph_vg2"
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:         }
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]:     ]
Nov 22 05:50:19 compute-0 inspiring_dijkstra[261424]: }
Nov 22 05:50:19 compute-0 systemd[1]: libpod-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope: Deactivated successfully.
Nov 22 05:50:19 compute-0 podman[261408]: 2025-11-22 05:50:19.111604379 +0000 UTC m=+0.839010143 container died 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7435007803750e417f3078dad04d1a25d2683db724278cdbc1886e8ad587816-merged.mount: Deactivated successfully.
Nov 22 05:50:19 compute-0 podman[261408]: 2025-11-22 05:50:19.201909512 +0000 UTC m=+0.929315236 container remove 313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 05:50:19 compute-0 systemd[1]: libpod-conmon-313476adce7d0c51199d15d1763bfef1172e6ddb748e6cec6314be4521a16356.scope: Deactivated successfully.
Nov 22 05:50:19 compute-0 sudo[261300]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:19 compute-0 sudo[261448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:19 compute-0 sudo[261448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:19 compute-0 sudo[261448]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:19 compute-0 sudo[261473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:50:19 compute-0 sudo[261473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:19 compute-0 sudo[261473]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:19 compute-0 sudo[261498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:19 compute-0 sudo[261498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:19 compute-0 sudo[261498]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:19 compute-0 sudo[261523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:50:19 compute-0 sudo[261523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:19 compute-0 ceph-mon[75840]: pgmap v884: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:19 compute-0 podman[261589]: 2025-11-22 05:50:19.940200051 +0000 UTC m=+0.058317556 container create 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:50:19 compute-0 systemd[1]: Started libpod-conmon-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope.
Nov 22 05:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:19.911275464 +0000 UTC m=+0.029393019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:20.021289987 +0000 UTC m=+0.139407562 container init 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:20.028144441 +0000 UTC m=+0.146261956 container start 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:20.032035805 +0000 UTC m=+0.150153290 container attach 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:50:20 compute-0 zealous_montalcini[261605]: 167 167
Nov 22 05:50:20 compute-0 systemd[1]: libpod-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope: Deactivated successfully.
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:20.033461434 +0000 UTC m=+0.151578949 container died 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef0fec6b5597638514c2ec3f48b8437661cf249e4b7c2f6f2ff8add03cafb2c7-merged.mount: Deactivated successfully.
Nov 22 05:50:20 compute-0 podman[261589]: 2025-11-22 05:50:20.082595132 +0000 UTC m=+0.200712647 container remove 138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:50:20 compute-0 systemd[1]: libpod-conmon-138de07e223847c18ead7a6ca0e01729abc9cbfbe34c08e5b6fe1cfa52cd016b.scope: Deactivated successfully.
Nov 22 05:50:20 compute-0 podman[261627]: 2025-11-22 05:50:20.26105679 +0000 UTC m=+0.057077083 container create 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:50:20 compute-0 systemd[1]: Started libpod-conmon-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope.
Nov 22 05:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:50:20 compute-0 podman[261627]: 2025-11-22 05:50:20.241836544 +0000 UTC m=+0.037856877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:50:20 compute-0 podman[261627]: 2025-11-22 05:50:20.353460539 +0000 UTC m=+0.149480912 container init 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:50:20 compute-0 podman[261627]: 2025-11-22 05:50:20.366227451 +0000 UTC m=+0.162247764 container start 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:50:20 compute-0 podman[261627]: 2025-11-22 05:50:20.370391493 +0000 UTC m=+0.166411806 container attach 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:50:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:20 compute-0 ceph-mon[75840]: pgmap v885: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:21 compute-0 wizardly_colden[261643]: {
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_id": 1,
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "type": "bluestore"
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     },
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_id": 2,
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "type": "bluestore"
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     },
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_id": 0,
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:         "type": "bluestore"
Nov 22 05:50:21 compute-0 wizardly_colden[261643]:     }
Nov 22 05:50:21 compute-0 wizardly_colden[261643]: }
Nov 22 05:50:21 compute-0 systemd[1]: libpod-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Deactivated successfully.
Nov 22 05:50:21 compute-0 systemd[1]: libpod-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Consumed 1.177s CPU time.
Nov 22 05:50:21 compute-0 podman[261627]: 2025-11-22 05:50:21.534108957 +0000 UTC m=+1.330129270 container died 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-8abe56809797e49bd65430ded8641ba4a7ef34d5dd5e8b9482f5ea67d398083a-merged.mount: Deactivated successfully.
Nov 22 05:50:21 compute-0 podman[261627]: 2025-11-22 05:50:21.597245111 +0000 UTC m=+1.393265434 container remove 14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 05:50:21 compute-0 systemd[1]: libpod-conmon-14783086e9d0a20b2f23f79d5b9be79779bc82ceb8087f2851e9c0d950192fb3.scope: Deactivated successfully.
Nov 22 05:50:21 compute-0 sudo[261523]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:50:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:50:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:21 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 713fe6e3-0764-4749-8b31-032ee2b65400 does not exist
Nov 22 05:50:21 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 321e67d5-0623-4a7c-82f2-e27924958e2d does not exist
Nov 22 05:50:21 compute-0 sudo[261690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:50:21 compute-0 sudo[261690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:21 compute-0 sudo[261690]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:21 compute-0 sudo[261715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:50:21 compute-0 sudo[261715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:50:21 compute-0 sudo[261715]: pam_unix(sudo:session): session closed for user root
Nov 22 05:50:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 22 05:50:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 22 05:50:21 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 22 05:50:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.449010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622449053, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 251, "total_data_size": 390795, "memory_usage": 399192, "flush_reason": "Manual Compaction"}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622453815, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 321399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18396, "largest_seqno": 18865, "table_properties": {"data_size": 318799, "index_size": 636, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6816, "raw_average_key_size": 19, "raw_value_size": 313439, "raw_average_value_size": 913, "num_data_blocks": 28, "num_entries": 343, "num_filter_entries": 343, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790599, "oldest_key_time": 1763790599, "file_creation_time": 1763790622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4884 microseconds, and 2532 cpu microseconds.
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.453890) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 321399 bytes OK
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.453921) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455575) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455603) EVENT_LOG_v1 {"time_micros": 1763790622455593, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.455629) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 387986, prev total WAL file size 387986, number of live WAL files 2.
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.456322) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(313KB)], [41(9094KB)]
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622456368, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9634494, "oldest_snapshot_seqno": -1}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4247 keys, 6387801 bytes, temperature: kUnknown
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622513142, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6387801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6360093, "index_size": 16015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103679, "raw_average_key_size": 24, "raw_value_size": 6283886, "raw_average_value_size": 1479, "num_data_blocks": 675, "num_entries": 4247, "num_filter_entries": 4247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.513438) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6387801 bytes
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.514973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 112.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(49.9) write-amplify(19.9) OK, records in: 4760, records dropped: 513 output_compression: NoCompression
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.515050) EVENT_LOG_v1 {"time_micros": 1763790622514993, "job": 20, "event": "compaction_finished", "compaction_time_micros": 56865, "compaction_time_cpu_micros": 33510, "output_level": 6, "num_output_files": 1, "total_output_size": 6387801, "num_input_records": 4760, "num_output_records": 4247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622515360, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790622518830, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.456214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.518992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:50:22.519011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:50:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 22 05:50:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:50:22 compute-0 ceph-mon[75840]: osdmap e128: 3 total, 3 up, 3 in
Nov 22 05:50:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 22 05:50:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 22 05:50:22 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 22 05:50:23 compute-0 ceph-mon[75840]: pgmap v887: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 22 05:50:23 compute-0 ceph-mon[75840]: osdmap e129: 3 total, 3 up, 3 in
Nov 22 05:50:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 22 05:50:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 22 05:50:24 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 22 05:50:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Nov 22 05:50:25 compute-0 ceph-mon[75840]: osdmap e130: 3 total, 3 up, 3 in
Nov 22 05:50:25 compute-0 ceph-mon[75840]: pgmap v890: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s wr, 0 op/s
Nov 22 05:50:26 compute-0 podman[261742]: 2025-11-22 05:50:26.265736583 +0000 UTC m=+0.099635195 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 05:50:26 compute-0 podman[261741]: 2025-11-22 05:50:26.280766246 +0000 UTC m=+0.114832063 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:50:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 22 05:50:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 22 05:50:26 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 22 05:50:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v892: 321 pgs: 321 active+clean; 16 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 MiB/s wr, 49 op/s
Nov 22 05:50:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:27 compute-0 ceph-mon[75840]: osdmap e131: 3 total, 3 up, 3 in
Nov 22 05:50:27 compute-0 ceph-mon[75840]: pgmap v892: 321 pgs: 321 active+clean; 16 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 MiB/s wr, 49 op/s
Nov 22 05:50:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v893: 321 pgs: 321 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.1 MiB/s wr, 42 op/s
Nov 22 05:50:29 compute-0 ceph-mon[75840]: pgmap v893: 321 pgs: 321 active+clean; 37 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.1 MiB/s wr, 42 op/s
Nov 22 05:50:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v894: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 22 05:50:31 compute-0 ceph-mon[75840]: pgmap v894: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Nov 22 05:50:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 22 05:50:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 22 05:50:32 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 22 05:50:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v896: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 05:50:33 compute-0 ceph-mon[75840]: osdmap e132: 3 total, 3 up, 3 in
Nov 22 05:50:33 compute-0 ceph-mon[75840]: pgmap v896: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 22 05:50:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v897: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Nov 22 05:50:36 compute-0 ceph-mon[75840]: pgmap v897: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Nov 22 05:50:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v898: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.5 MiB/s wr, 14 op/s
Nov 22 05:50:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.930 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:50:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.931 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:50:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:36.931 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:50:37 compute-0 ceph-mon[75840]: pgmap v898: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.5 MiB/s wr, 14 op/s
Nov 22 05:50:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v899: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 455 KiB/s wr, 12 op/s
Nov 22 05:50:39 compute-0 ceph-mon[75840]: pgmap v899: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 455 KiB/s wr, 12 op/s
Nov 22 05:50:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v900: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 22 05:50:41 compute-0 ceph-mon[75840]: pgmap v900: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 22 05:50:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v901: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:43 compute-0 ceph-mon[75840]: pgmap v901: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:50:43
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'vms', 'images']
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:50:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:50:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v902: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:45 compute-0 podman[261783]: 2025-11-22 05:50:45.2932025 +0000 UTC m=+0.141055865 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 05:50:45 compute-0 ceph-mon[75840]: pgmap v902: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:45 compute-0 sshd-session[261781]: Invalid user trader from 80.94.92.166 port 41290
Nov 22 05:50:46 compute-0 sshd-session[261781]: Connection closed by invalid user trader 80.94.92.166 port 41290 [preauth]
Nov 22 05:50:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v903: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:50:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:50:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:50:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:50:47 compute-0 ceph-mon[75840]: pgmap v903: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:50:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/340953056' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:50:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v904: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:49 compute-0 ceph-mon[75840]: pgmap v904: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:50 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:50.418 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:50:50 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:50.419 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:50.537+0000 7f5339360640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v905: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "format": "json"}]: dispatch
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:50:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:50:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:50:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:51 compute-0 ceph-mon[75840]: pgmap v905: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 05:50:51 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v906: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 05:50:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "format": "json"}]: dispatch
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.266792016669923e-07 of space, bias 4.0, pg target 0.0009920150420003907 quantized to 16 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:50:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:50:52 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mscchl(active, since 26m)
Nov 22 05:50:53 compute-0 ceph-mon[75840]: pgmap v906: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 05:50:53 compute-0 ceph-mon[75840]: mgrmap e10: compute-0.mscchl(active, since 26m)
Nov 22 05:50:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:50:54 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v907: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mon[75840]: pgmap v907: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s wr, 0 op/s
Nov 22 05:50:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta.tmp'
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta.tmp' to config b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489/.meta'
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:50:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta.tmp'
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta.tmp' to config b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc/.meta'
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:50:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:50:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v908: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 2 op/s
Nov 22 05:50:57 compute-0 podman[261824]: 2025-11-22 05:50:57.241196156 +0000 UTC m=+0.096828279 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 22 05:50:57 compute-0 ceph-mon[75840]: pgmap v908: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s wr, 2 op/s
Nov 22 05:50:57 compute-0 podman[261825]: 2025-11-22 05:50:57.247443123 +0000 UTC m=+0.097220559 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 22 05:50:57 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:50:57.421 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:50:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:50:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "format": "json"}]: dispatch
Nov 22 05:50:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:50:58 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:50:58 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:58 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v909: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6b1d082-aa60-414d-aa02-6f616d2261dc' of type subvolume
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.114+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6b1d082-aa60-414d-aa02-6f616d2261dc' of type subvolume
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6b1d082-aa60-414d-aa02-6f616d2261dc'' moved to trashcan
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6b1d082-aa60-414d-aa02-6f616d2261dc, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.132+0000 7f533c366640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.163+0000 7f533b364640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d695e4c-80d1-4558-8e35-fa4463b56489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d695e4c-80d1-4558-8e35-fa4463b56489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d695e4c-80d1-4558-8e35-fa4463b56489' of type subvolume
Nov 22 05:50:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:50:59.337+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d695e4c-80d1-4558-8e35-fa4463b56489' of type subvolume
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "force": true, "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d695e4c-80d1-4558-8e35-fa4463b56489'' moved to trashcan
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:50:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d695e4c-80d1-4558-8e35-fa4463b56489, vol_name:cephfs) < ""
Nov 22 05:50:59 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:50:59 compute-0 ceph-mon[75840]: pgmap v909: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 05:51:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v910: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 05:51:00 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mscchl(active, since 26m)
Nov 22 05:51:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "format": "json"}]: dispatch
Nov 22 05:51:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6b1d082-aa60-414d-aa02-6f616d2261dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "format": "json"}]: dispatch
Nov 22 05:51:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d695e4c-80d1-4558-8e35-fa4463b56489", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:01 compute-0 ceph-mon[75840]: pgmap v910: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 2 op/s
Nov 22 05:51:01 compute-0 ceph-mon[75840]: mgrmap e11: compute-0.mscchl(active, since 26m)
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v911: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 4 op/s
Nov 22 05:51:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp'
Nov 22 05:51:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta.tmp' to config b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759/.meta'
Nov 22 05:51:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1aa7681a-db1a-45b0-a136-7ab46880c277, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:03 compute-0 ceph-mon[75840]: pgmap v911: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 15 KiB/s wr, 4 op/s
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta.tmp'
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta.tmp' to config b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c/.meta'
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:04 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v912: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 4 op/s
Nov 22 05:51:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277_9dfa855e-1208-478c-a1ee-6451e9ea868d", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "snap_name": "1aa7681a-db1a-45b0-a136-7ab46880c277", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 05:51:05 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:05 compute-0 ceph-mon[75840]: pgmap v912: 321 pgs: 321 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 4 op/s
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:06 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:06.552+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15202c81-eb7f-4a9b-b839-74d8d3eac759' of type subvolume
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '15202c81-eb7f-4a9b-b839-74d8d3eac759' of type subvolume
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/15202c81-eb7f-4a9b-b839-74d8d3eac759'' moved to trashcan
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:15202c81-eb7f-4a9b-b839-74d8d3eac759, vol_name:cephfs) < ""
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v913: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 5 op/s
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta.tmp'
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta.tmp' to config b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93/.meta'
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 22 05:51:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 22 05:51:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 22 05:51:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15202c81-eb7f-4a9b-b839-74d8d3eac759", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mon[75840]: pgmap v913: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 20 KiB/s wr, 5 op/s
Nov 22 05:51:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:51:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v915: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 05:51:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 05:51:08 compute-0 ceph-mon[75840]: osdmap e133: 3 total, 3 up, 3 in
Nov 22 05:51:09 compute-0 ceph-mon[75840]: pgmap v915: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d9b34a10-7e37-4811-ad95-28431845630c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d9b34a10-7e37-4811-ad95-28431845630c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:10 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:10.307+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd9b34a10-7e37-4811-ad95-28431845630c' of type subvolume
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd9b34a10-7e37-4811-ad95-28431845630c' of type subvolume
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d9b34a10-7e37-4811-ad95-28431845630c'' moved to trashcan
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d9b34a10-7e37-4811-ad95-28431845630c, vol_name:cephfs) < ""
Nov 22 05:51:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v916: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 05:51:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d9b34a10-7e37-4811-ad95-28431845630c", "format": "json"}]: dispatch
Nov 22 05:51:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d9b34a10-7e37-4811-ad95-28431845630c", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:10 compute-0 ceph-mon[75840]: pgmap v916: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 17 KiB/s wr, 5 op/s
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta.tmp'
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta.tmp' to config b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378/.meta'
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 05:51:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v917: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.754796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672755384, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 762, "num_deletes": 258, "total_data_size": 1037078, "memory_usage": 1052384, "flush_reason": "Manual Compaction"}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672774146, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1028635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18866, "largest_seqno": 19627, "table_properties": {"data_size": 1024615, "index_size": 1736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8945, "raw_average_key_size": 18, "raw_value_size": 1016355, "raw_average_value_size": 2130, "num_data_blocks": 79, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790622, "oldest_key_time": 1763790622, "file_creation_time": 1763790672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 19067 microseconds, and 7549 cpu microseconds.
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.774230) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1028635 bytes OK
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.774266) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783919) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783936) EVENT_LOG_v1 {"time_micros": 1763790672783929, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.783961) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1033058, prev total WAL file size 1033058, number of live WAL files 2.
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.784658) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1004KB)], [44(6238KB)]
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672784744, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7416436, "oldest_snapshot_seqno": -1}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4194 keys, 7292195 bytes, temperature: kUnknown
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672908808, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7292195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7263338, "index_size": 17291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103953, "raw_average_key_size": 24, "raw_value_size": 7186509, "raw_average_value_size": 1713, "num_data_blocks": 725, "num_entries": 4194, "num_filter_entries": 4194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.909146) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7292195 bytes
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.960679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.7 rd, 58.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.1 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(14.3) write-amplify(7.1) OK, records in: 4724, records dropped: 530 output_compression: NoCompression
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.960744) EVENT_LOG_v1 {"time_micros": 1763790672960720, "job": 22, "event": "compaction_finished", "compaction_time_micros": 124154, "compaction_time_cpu_micros": 32164, "output_level": 6, "num_output_files": 1, "total_output_size": 7292195, "num_input_records": 4724, "num_output_records": 4194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672961260, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790672963621, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.784502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:12 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:51:12.963686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.170 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.329 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.330 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.330 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.331 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.332 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:51:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827245382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:51:13 compute-0 nova_compute[255660]: 2025-11-22 05:51:13.831 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:51:13 compute-0 ceph-mon[75840]: pgmap v917: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.047 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.049 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.050 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.051 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta.tmp'
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta.tmp' to config b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5/.meta'
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.405 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.406 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.647 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 05:51:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v918: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.743 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.744 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.763 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.795 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 05:51:14 compute-0 nova_compute[255660]: 2025-11-22 05:51:14.814 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:51:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1827245382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:14 compute-0 ceph-mon[75840]: pgmap v918: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f30f85a5-1564-4626-84fb-0c570e11fc93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f30f85a5-1564-4626-84fb-0c570e11fc93, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:15.179+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30f85a5-1564-4626-84fb-0c570e11fc93' of type subvolume
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30f85a5-1564-4626-84fb-0c570e11fc93' of type subvolume
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f30f85a5-1564-4626-84fb-0c570e11fc93'' moved to trashcan
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30f85a5-1564-4626-84fb-0c570e11fc93, vol_name:cephfs) < ""
Nov 22 05:51:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:51:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3703170812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:51:15 compute-0 nova_compute[255660]: 2025-11-22 05:51:15.337 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:51:15 compute-0 nova_compute[255660]: 2025-11-22 05:51:15.345 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:51:15 compute-0 nova_compute[255660]: 2025-11-22 05:51:15.366 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:51:15 compute-0 nova_compute[255660]: 2025-11-22 05:51:15.368 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:51:15 compute-0 nova_compute[255660]: 2025-11-22 05:51:15.369 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:51:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "format": "json"}]: dispatch
Nov 22 05:51:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f30f85a5-1564-4626-84fb-0c570e11fc93", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3703170812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:51:16 compute-0 podman[261933]: 2025-11-22 05:51:16.283720716 +0000 UTC m=+0.136174074 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 05:51:16 compute-0 nova_compute[255660]: 2025-11-22 05:51:16.328 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:16 compute-0 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:16 compute-0 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:16 compute-0 nova_compute[255660]: 2025-11-22 05:51:16.329 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:51:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v919: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:16 compute-0 ceph-mon[75840]: pgmap v919: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:17 compute-0 nova_compute[255660]: 2025-11-22 05:51:17.126 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:17 compute-0 nova_compute[255660]: 2025-11-22 05:51:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:17 compute-0 nova_compute[255660]: 2025-11-22 05:51:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 22 05:51:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 22 05:51:17 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2cac985c-91b9-4f35-a81a-295d69c728b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2cac985c-91b9-4f35-a81a-295d69c728b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:17 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:17.955+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cac985c-91b9-4f35-a81a-295d69c728b5' of type subvolume
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2cac985c-91b9-4f35-a81a-295d69c728b5' of type subvolume
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2cac985c-91b9-4f35-a81a-295d69c728b5'' moved to trashcan
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2cac985c-91b9-4f35-a81a-295d69c728b5, vol_name:cephfs) < ""
Nov 22 05:51:18 compute-0 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:18 compute-0 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:51:18 compute-0 nova_compute[255660]: 2025-11-22 05:51:18.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:51:18 compute-0 nova_compute[255660]: 2025-11-22 05:51:18.153 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:18.671+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d6b5119-ad0e-4013-9a5e-284fedb56378' of type subvolume
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7d6b5119-ad0e-4013-9a5e-284fedb56378' of type subvolume
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v921: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7d6b5119-ad0e-4013-9a5e-284fedb56378'' moved to trashcan
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7d6b5119-ad0e-4013-9a5e-284fedb56378, vol_name:cephfs) < ""
Nov 22 05:51:18 compute-0 ceph-mon[75840]: osdmap e134: 3 total, 3 up, 3 in
Nov 22 05:51:19 compute-0 nova_compute[255660]: 2025-11-22 05:51:19.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta.tmp'
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta.tmp' to config b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8/.meta'
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2cac985c-91b9-4f35-a81a-295d69c728b5", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mon[75840]: pgmap v921: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7d6b5119-ad0e-4013-9a5e-284fedb56378", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v922: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "format": "json"}]: dispatch
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta.tmp'
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta.tmp' to config b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030/.meta'
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "format": "json"}]: dispatch
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:21 compute-0 ceph-mon[75840]: pgmap v922: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 21 KiB/s wr, 5 op/s
Nov 22 05:51:21 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:21 compute-0 sudo[261960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:21 compute-0 sudo[261960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:21 compute-0 sudo[261960]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:21 compute-0 sudo[261985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:51:21 compute-0 sudo[261985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[261985]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:22 compute-0 sudo[262010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:22 compute-0 sudo[262010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[262010]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:22 compute-0 sudo[262035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:51:22 compute-0 sudo[262035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[262035]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v923: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 23 KiB/s wr, 6 op/s
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:22 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 2c18c44f-0b52-46b5-8818-67b7682cec59 does not exist
Nov 22 05:51:22 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8e87ce81-d908-43fa-9ad6-6f5489194787 does not exist
Nov 22 05:51:22 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev fe5690d2-1f69-4916-9f8f-8a16026552e1 does not exist
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:51:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "format": "json"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:51:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:51:22 compute-0 sudo[262091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:22 compute-0 sudo[262091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[262091]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:22 compute-0 sudo[262116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:51:22 compute-0 sudo[262116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[262116]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:22 compute-0 sudo[262141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:22 compute-0 rsyslogd[1005]: imjournal from <np0005531754:sudo>: begin to drop messages due to rate-limiting
Nov 22 05:51:22 compute-0 sudo[262141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:22 compute-0 sudo[262141]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:23 compute-0 sudo[262166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:51:23 compute-0 sudo[262166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.433002389 +0000 UTC m=+0.054600696 container create 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:51:23 compute-0 systemd[1]: Started libpod-conmon-3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302.scope.
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.403803485 +0000 UTC m=+0.025401832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.547735937 +0000 UTC m=+0.169334274 container init 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.558817545 +0000 UTC m=+0.180415842 container start 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.564340663 +0000 UTC m=+0.185939030 container attach 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:51:23 compute-0 charming_neumann[262247]: 167 167
Nov 22 05:51:23 compute-0 systemd[1]: libpod-3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302.scope: Deactivated successfully.
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.565681529 +0000 UTC m=+0.187279826 container died 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:51:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7695cfa869a5a7977213a0708b3953355a8052abb7a4035ce07b99e0256b5ff8-merged.mount: Deactivated successfully.
Nov 22 05:51:23 compute-0 podman[262230]: 2025-11-22 05:51:23.66560374 +0000 UTC m=+0.287202017 container remove 3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:51:23 compute-0 systemd[1]: libpod-conmon-3b1dad9817700653b7b6b499abaf982c9ff001a63eea2e439fc53df555263302.scope: Deactivated successfully.
Nov 22 05:51:23 compute-0 ceph-mon[75840]: pgmap v923: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 23 KiB/s wr, 6 op/s
Nov 22 05:51:23 compute-0 podman[262272]: 2025-11-22 05:51:23.884859743 +0000 UTC m=+0.064300666 container create 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:51:23 compute-0 systemd[1]: Started libpod-conmon-744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7.scope.
Nov 22 05:51:23 compute-0 podman[262272]: 2025-11-22 05:51:23.853784649 +0000 UTC m=+0.033225612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:23 compute-0 podman[262272]: 2025-11-22 05:51:23.986554942 +0000 UTC m=+0.165995845 container init 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:51:23 compute-0 podman[262272]: 2025-11-22 05:51:23.998017769 +0000 UTC m=+0.177458652 container start 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 05:51:24 compute-0 podman[262272]: 2025-11-22 05:51:24.001198084 +0000 UTC m=+0.180638967 container attach 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:51:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v924: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 23 KiB/s wr, 6 op/s
Nov 22 05:51:25 compute-0 interesting_panini[262289]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:51:25 compute-0 interesting_panini[262289]: --> relative data size: 1.0
Nov 22 05:51:25 compute-0 interesting_panini[262289]: --> All data devices are unavailable
Nov 22 05:51:25 compute-0 systemd[1]: libpod-744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7.scope: Deactivated successfully.
Nov 22 05:51:25 compute-0 systemd[1]: libpod-744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7.scope: Consumed 1.057s CPU time.
Nov 22 05:51:25 compute-0 podman[262272]: 2025-11-22 05:51:25.111272769 +0000 UTC m=+1.290713682 container died 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-48065b86b988f3724d8b46eb5b273cbbd2d76896802b10fcdcffb1b0b9b5ffe1-merged.mount: Deactivated successfully.
Nov 22 05:51:25 compute-0 podman[262272]: 2025-11-22 05:51:25.181626686 +0000 UTC m=+1.361067569 container remove 744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_panini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:51:25 compute-0 systemd[1]: libpod-conmon-744178c4db0da480732bcd1c9abfb0547ec187e714676066ccc0dab761d646f7.scope: Deactivated successfully.
Nov 22 05:51:25 compute-0 sudo[262166]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "format": "json"}]: dispatch
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c1cff93-cfe8-43e8-b934-82a3cf7b6030' of type subvolume
Nov 22 05:51:25 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:25.221+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c1cff93-cfe8-43e8-b934-82a3cf7b6030' of type subvolume
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c1cff93-cfe8-43e8-b934-82a3cf7b6030'' moved to trashcan
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c1cff93-cfe8-43e8-b934-82a3cf7b6030, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 sudo[262330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:25 compute-0 sudo[262330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:25 compute-0 sudo[262330]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:25 compute-0 sudo[262355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:51:25 compute-0 sudo[262355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:25 compute-0 sudo[262355]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "format": "json"}]: dispatch
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:51:25.401+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f68ab23-30d2-4b25-b726-bc4bc13231e8' of type subvolume
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f68ab23-30d2-4b25-b726-bc4bc13231e8' of type subvolume
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1f68ab23-30d2-4b25-b726-bc4bc13231e8'' moved to trashcan
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:51:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f68ab23-30d2-4b25-b726-bc4bc13231e8, vol_name:cephfs) < ""
Nov 22 05:51:25 compute-0 sudo[262380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:25 compute-0 sudo[262380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:25 compute-0 sudo[262380]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:25 compute-0 sudo[262405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:51:25 compute-0 sudo[262405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:25 compute-0 ceph-mon[75840]: pgmap v924: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 23 KiB/s wr, 6 op/s
Nov 22 05:51:25 compute-0 podman[262471]: 2025-11-22 05:51:25.920149102 +0000 UTC m=+0.058074080 container create 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:51:25 compute-0 systemd[1]: Started libpod-conmon-4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d.scope.
Nov 22 05:51:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:25 compute-0 podman[262471]: 2025-11-22 05:51:25.900385182 +0000 UTC m=+0.038310170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:26 compute-0 podman[262471]: 2025-11-22 05:51:26.003656292 +0000 UTC m=+0.141581300 container init 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:51:26 compute-0 podman[262471]: 2025-11-22 05:51:26.010383553 +0000 UTC m=+0.148308541 container start 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 05:51:26 compute-0 podman[262471]: 2025-11-22 05:51:26.016932799 +0000 UTC m=+0.154857817 container attach 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:51:26 compute-0 vigorous_colden[262488]: 167 167
Nov 22 05:51:26 compute-0 systemd[1]: libpod-4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d.scope: Deactivated successfully.
Nov 22 05:51:26 compute-0 podman[262471]: 2025-11-22 05:51:26.018535512 +0000 UTC m=+0.156460500 container died 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e70df2da695fc87220fac56ffc72c88adc339da16d0e12ded4fecfc281f95596-merged.mount: Deactivated successfully.
Nov 22 05:51:26 compute-0 podman[262471]: 2025-11-22 05:51:26.094275904 +0000 UTC m=+0.232200892 container remove 4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_colden, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:51:26 compute-0 systemd[1]: libpod-conmon-4b06f46fcab6915590b98b9b9524f05cf56aecef59ec659d44dca0acf3f9b17d.scope: Deactivated successfully.
Nov 22 05:51:26 compute-0 podman[262514]: 2025-11-22 05:51:26.297203349 +0000 UTC m=+0.079462624 container create f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:51:26 compute-0 podman[262514]: 2025-11-22 05:51:26.243676353 +0000 UTC m=+0.025935658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:26 compute-0 systemd[1]: Started libpod-conmon-f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568.scope.
Nov 22 05:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773cd95aa59450aec07c961d9a601803c1abc97bcd76f8f4196f6c134e12e82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773cd95aa59450aec07c961d9a601803c1abc97bcd76f8f4196f6c134e12e82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773cd95aa59450aec07c961d9a601803c1abc97bcd76f8f4196f6c134e12e82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c773cd95aa59450aec07c961d9a601803c1abc97bcd76f8f4196f6c134e12e82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:26 compute-0 podman[262514]: 2025-11-22 05:51:26.425418529 +0000 UTC m=+0.207677764 container init f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:51:26 compute-0 podman[262514]: 2025-11-22 05:51:26.43737194 +0000 UTC m=+0.219631175 container start f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:51:26 compute-0 podman[262514]: 2025-11-22 05:51:26.441975613 +0000 UTC m=+0.224234848 container attach f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:51:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v925: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "format": "json"}]: dispatch
Nov 22 05:51:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c1cff93-cfe8-43e8-b934-82a3cf7b6030", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "format": "json"}]: dispatch
Nov 22 05:51:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1f68ab23-30d2-4b25-b726-bc4bc13231e8", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:27 compute-0 clever_moore[262530]: {
Nov 22 05:51:27 compute-0 clever_moore[262530]:     "0": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:         {
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "devices": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "/dev/loop3"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             ],
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_name": "ceph_lv0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_size": "21470642176",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "name": "ceph_lv0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "tags": {
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_name": "ceph",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.crush_device_class": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.encrypted": "0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_id": "0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.vdo": "0"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             },
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "vg_name": "ceph_vg0"
Nov 22 05:51:27 compute-0 clever_moore[262530]:         }
Nov 22 05:51:27 compute-0 clever_moore[262530]:     ],
Nov 22 05:51:27 compute-0 clever_moore[262530]:     "1": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:         {
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "devices": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "/dev/loop4"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             ],
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_name": "ceph_lv1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_size": "21470642176",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "name": "ceph_lv1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "tags": {
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_name": "ceph",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.crush_device_class": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.encrypted": "0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_id": "1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.vdo": "0"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             },
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "vg_name": "ceph_vg1"
Nov 22 05:51:27 compute-0 clever_moore[262530]:         }
Nov 22 05:51:27 compute-0 clever_moore[262530]:     ],
Nov 22 05:51:27 compute-0 clever_moore[262530]:     "2": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:         {
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "devices": [
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "/dev/loop5"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             ],
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_name": "ceph_lv2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_size": "21470642176",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "name": "ceph_lv2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "tags": {
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.cluster_name": "ceph",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.crush_device_class": "",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.encrypted": "0",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osd_id": "2",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:                 "ceph.vdo": "0"
Nov 22 05:51:27 compute-0 clever_moore[262530]:             },
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "type": "block",
Nov 22 05:51:27 compute-0 clever_moore[262530]:             "vg_name": "ceph_vg2"
Nov 22 05:51:27 compute-0 clever_moore[262530]:         }
Nov 22 05:51:27 compute-0 clever_moore[262530]:     ]
Nov 22 05:51:27 compute-0 clever_moore[262530]: }
Nov 22 05:51:27 compute-0 systemd[1]: libpod-f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568.scope: Deactivated successfully.
Nov 22 05:51:27 compute-0 podman[262514]: 2025-11-22 05:51:27.200828793 +0000 UTC m=+0.983088028 container died f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c773cd95aa59450aec07c961d9a601803c1abc97bcd76f8f4196f6c134e12e82-merged.mount: Deactivated successfully.
Nov 22 05:51:27 compute-0 podman[262514]: 2025-11-22 05:51:27.263383412 +0000 UTC m=+1.045642657 container remove f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moore, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 05:51:27 compute-0 systemd[1]: libpod-conmon-f713f165bf0c0944fc4bb5d64ca1122349249f2dc3357e6ded777e8777792568.scope: Deactivated successfully.
Nov 22 05:51:27 compute-0 sudo[262405]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:27 compute-0 podman[262552]: 2025-11-22 05:51:27.375630944 +0000 UTC m=+0.068375316 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 05:51:27 compute-0 sudo[262564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:27 compute-0 podman[262553]: 2025-11-22 05:51:27.383358682 +0000 UTC m=+0.075209060 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd)
Nov 22 05:51:27 compute-0 sudo[262564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:27 compute-0 sudo[262564]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:27 compute-0 sudo[262614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:51:27 compute-0 sudo[262614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:27 compute-0 sudo[262614]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:27 compute-0 sudo[262639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:27 compute-0 sudo[262639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:27 compute-0 sudo[262639]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:27 compute-0 sudo[262664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:51:27 compute-0 sudo[262664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:27 compute-0 ceph-mon[75840]: pgmap v925: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 22 KiB/s wr, 6 op/s
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.001691872 +0000 UTC m=+0.066278529 container create 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:27.964822343 +0000 UTC m=+0.029409010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:28 compute-0 systemd[1]: Started libpod-conmon-9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a.scope.
Nov 22 05:51:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.133774606 +0000 UTC m=+0.198361293 container init 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.143864477 +0000 UTC m=+0.208451134 container start 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:51:28 compute-0 objective_chebyshev[262746]: 167 167
Nov 22 05:51:28 compute-0 systemd[1]: libpod-9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a.scope: Deactivated successfully.
Nov 22 05:51:28 compute-0 conmon[262746]: conmon 9721ad1ff48ec9eba78c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a.scope/container/memory.events
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.151705536 +0000 UTC m=+0.216292233 container attach 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.152345234 +0000 UTC m=+0.216931891 container died 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c39825535c41d69c91d8c9b8b16127ef08a22ede0443e3470899c92889c11db-merged.mount: Deactivated successfully.
Nov 22 05:51:28 compute-0 podman[262730]: 2025-11-22 05:51:28.258144813 +0000 UTC m=+0.322731460 container remove 9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chebyshev, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:51:28 compute-0 systemd[1]: libpod-conmon-9721ad1ff48ec9eba78cb3794bd1a182bac0f8bea949d746ee7b259ecac37c7a.scope: Deactivated successfully.
Nov 22 05:51:28 compute-0 podman[262772]: 2025-11-22 05:51:28.490871527 +0000 UTC m=+0.054324968 container create 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:51:28 compute-0 systemd[1]: Started libpod-conmon-48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c.scope.
Nov 22 05:51:28 compute-0 podman[262772]: 2025-11-22 05:51:28.470762017 +0000 UTC m=+0.034215498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:51:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaa9df3c6c98aae437f27382d73e87b1cb9f730d4d6365cc119a4aec3f24d15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaa9df3c6c98aae437f27382d73e87b1cb9f730d4d6365cc119a4aec3f24d15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaa9df3c6c98aae437f27382d73e87b1cb9f730d4d6365cc119a4aec3f24d15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aaa9df3c6c98aae437f27382d73e87b1cb9f730d4d6365cc119a4aec3f24d15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:51:28 compute-0 podman[262772]: 2025-11-22 05:51:28.612007397 +0000 UTC m=+0.175460858 container init 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:51:28 compute-0 podman[262772]: 2025-11-22 05:51:28.625030707 +0000 UTC m=+0.188484158 container start 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:51:28 compute-0 podman[262772]: 2025-11-22 05:51:28.636123094 +0000 UTC m=+0.199576595 container attach 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:51:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v926: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 654 B/s rd, 20 KiB/s wr, 5 op/s
Nov 22 05:51:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b", "format": "json"}]: dispatch
Nov 22 05:51:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:28 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 05:51:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:28 compute-0 ceph-mon[75840]: pgmap v926: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 654 B/s rd, 20 KiB/s wr, 5 op/s
Nov 22 05:51:28 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b", "format": "json"}]: dispatch
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]: {
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_id": 1,
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "type": "bluestore"
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     },
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_id": 2,
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "type": "bluestore"
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     },
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_id": 0,
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:         "type": "bluestore"
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]:     }
Nov 22 05:51:29 compute-0 intelligent_pascal[262788]: }
Nov 22 05:51:29 compute-0 systemd[1]: libpod-48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c.scope: Deactivated successfully.
Nov 22 05:51:29 compute-0 systemd[1]: libpod-48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c.scope: Consumed 1.076s CPU time.
Nov 22 05:51:29 compute-0 conmon[262788]: conmon 48b7cc34486810b62cfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c.scope/container/memory.events
Nov 22 05:51:29 compute-0 podman[262772]: 2025-11-22 05:51:29.698721724 +0000 UTC m=+1.262175175 container died 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 22 05:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aaa9df3c6c98aae437f27382d73e87b1cb9f730d4d6365cc119a4aec3f24d15-merged.mount: Deactivated successfully.
Nov 22 05:51:30 compute-0 podman[262772]: 2025-11-22 05:51:30.285986082 +0000 UTC m=+1.849439533 container remove 48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:51:30 compute-0 systemd[1]: libpod-conmon-48b7cc34486810b62cfab8fce382af897a211962f5048c8847113f6c582bc13c.scope: Deactivated successfully.
Nov 22 05:51:30 compute-0 sudo[262664]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:51:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:51:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:30 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8ce8d748-ee6f-4e88-bfbf-371c8a485d4c does not exist
Nov 22 05:51:30 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5f062210-f4d6-4fd2-8599-deeff04c0182 does not exist
Nov 22 05:51:30 compute-0 sudo[262833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:51:30 compute-0 sudo[262833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:30 compute-0 sudo[262833]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:30 compute-0 sudo[262858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:51:30 compute-0 sudo[262858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:51:30 compute-0 sudo[262858]: pam_unix(sudo:session): session closed for user root
Nov 22 05:51:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v927: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 18 KiB/s wr, 5 op/s
Nov 22 05:51:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:51:31 compute-0 ceph-mon[75840]: pgmap v927: 321 pgs: 321 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 18 KiB/s wr, 5 op/s
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b_31459069-a7a6-4f0a-9109-45df237fe792", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b_31459069-a7a6-4f0a-9109-45df237fe792, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b_31459069-a7a6-4f0a-9109-45df237fe792, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fcd19511-0693-413f-9530-f3c80d5b7b7b, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v928: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 21 KiB/s wr, 7 op/s
Nov 22 05:51:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b_31459069-a7a6-4f0a-9109-45df237fe792", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "fcd19511-0693-413f-9530-f3c80d5b7b7b", "force": true, "format": "json"}]: dispatch
Nov 22 05:51:33 compute-0 ceph-mon[75840]: pgmap v928: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 21 KiB/s wr, 7 op/s
Nov 22 05:51:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v929: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.9 KiB/s wr, 3 op/s
Nov 22 05:51:35 compute-0 ceph-mon[75840]: pgmap v929: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.9 KiB/s wr, 3 op/s
Nov 22 05:51:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v930: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 13 KiB/s wr, 3 op/s
Nov 22 05:51:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:51:36.931 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:51:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:51:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:51:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:51:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:51:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:37 compute-0 ceph-mon[75840]: pgmap v930: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 13 KiB/s wr, 3 op/s
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e41b5283-9e15-4f8b-9973-4c9089dccbfc/.meta.tmp'
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e41b5283-9e15-4f8b-9973-4c9089dccbfc/.meta.tmp' to config b'/volumes/_nogroup/e41b5283-9e15-4f8b-9973-4c9089dccbfc/.meta'
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "format": "json"}]: dispatch
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:51:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:38 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v931: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 7.6 KiB/s wr, 3 op/s
Nov 22 05:51:38 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "format": "json"}]: dispatch
Nov 22 05:51:39 compute-0 ceph-mon[75840]: pgmap v931: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 7.6 KiB/s wr, 3 op/s
Nov 22 05:51:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v932: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 3 op/s
Nov 22 05:51:41 compute-0 ceph-mon[75840]: pgmap v932: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 3 op/s
Nov 22 05:51:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131", "format": "json"}]: dispatch
Nov 22 05:51:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp'
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp' to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta'
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "format": "json"}]: dispatch
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v933: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 4 op/s
Nov 22 05:51:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 22 05:51:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 22 05:51:42 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:42 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:51:43
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:51:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131", "format": "json"}]: dispatch
Nov 22 05:51:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "format": "json"}]: dispatch
Nov 22 05:51:43 compute-0 ceph-mon[75840]: pgmap v933: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 4 op/s
Nov 22 05:51:43 compute-0 ceph-mon[75840]: osdmap e135: 3 total, 3 up, 3 in
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:51:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v935: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.9 KiB/s wr, 2 op/s
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/.meta.tmp'
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/.meta.tmp' to config b'/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/.meta'
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "format": "json"}]: dispatch
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:51:44 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:44 compute-0 ceph-mon[75840]: pgmap v935: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.9 KiB/s wr, 2 op/s
Nov 22 05:51:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:51:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:51:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "format": "json"}]: dispatch
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7", "format": "json"}]: dispatch
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc", "format": "json"}]: dispatch
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:51:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v936: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 18 KiB/s wr, 4 op/s
Nov 22 05:51:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7", "format": "json"}]: dispatch
Nov 22 05:51:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc", "format": "json"}]: dispatch
Nov 22 05:51:47 compute-0 ceph-mon[75840]: pgmap v936: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 18 KiB/s wr, 4 op/s
Nov 22 05:51:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:51:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507824815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:51:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:51:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1507824815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:51:47 compute-0 podman[262883]: 2025-11-22 05:51:47.355417863 +0000 UTC m=+0.204587580 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:51:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1507824815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:51:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1507824815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:51:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:51:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:51:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 22 05:51:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 22 05:51:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID eve49 with tenant 1218c5e5dd6949df8f550c000dc3c24e
Nov 22 05:51:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:51:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:51:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:51:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:51:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v937: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 3 op/s
Nov 22 05:51:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:51:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 22 05:51:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:51:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:51:49 compute-0 ceph-mon[75840]: pgmap v937: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 3 op/s
Nov 22 05:51:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 3 op/s
Nov 22 05:51:50 compute-0 ceph-mon[75840]: pgmap v938: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 3 op/s
Nov 22 05:51:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be", "format": "json"}]: dispatch
Nov 22 05:51:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:51:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:51:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 22 05:51:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 22 05:51:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID eve48 with tenant 1218c5e5dd6949df8f550c000dc3c24e
Nov 22 05:51:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be", "format": "json"}]: dispatch
Nov 22 05:51:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 22 05:51:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:51:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:51:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 4 op/s
Nov 22 05:51:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 22 05:51:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 22 05:51:52 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 22 05:51:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:51:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:51:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:51:52 compute-0 ceph-mon[75840]: pgmap v939: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 4 op/s
Nov 22 05:51:52 compute-0 ceph-mon[75840]: osdmap e136: 3 total, 3 up, 3 in
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.430790925962102e-05 of space, bias 4.0, pg target 0.017169491111545223 quantized to 16 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:51:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:51:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 4 op/s
Nov 22 05:51:55 compute-0 ceph-mon[75840]: pgmap v941: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 4 op/s
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa", "format": "json"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 4 op/s
Nov 22 05:51:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 22 05:51:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) v1
Nov 22 05:51:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 22 05:51:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08
Nov 22 05:51:56 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08],prefix=session evict} (starting...)
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:51:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:51:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:51:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa", "format": "json"}]: dispatch
Nov 22 05:51:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 22 05:51:57 compute-0 ceph-mon[75840]: pgmap v942: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 4 op/s
Nov 22 05:51:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 22 05:51:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 22 05:51:57 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:51:57.952 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:51:57 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:51:57.953 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:51:58 compute-0 podman[262913]: 2025-11-22 05:51:58.218294206 +0000 UTC m=+0.074025336 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 22 05:51:58 compute-0 podman[262912]: 2025-11-22 05:51:58.242404824 +0000 UTC m=+0.097874708 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:51:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 4 op/s
Nov 22 05:51:58 compute-0 ceph-mon[75840]: pgmap v943: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s wr, 4 op/s
Nov 22 05:51:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:51:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:51:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 22 05:51:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 22 05:51:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID eve47 with tenant 1218c5e5dd6949df8f550c000dc3c24e
Nov 22 05:51:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:51:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:51:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 22 05:51:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, tenant_id:1218c5e5dd6949df8f550c000dc3c24e, vol_name:cephfs) < ""
Nov 22 05:52:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 5 op/s
Nov 22 05:52:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c", "format": "json"}]: dispatch
Nov 22 05:52:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "tenant_id": "1218c5e5dd6949df8f550c000dc3c24e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:01 compute-0 ceph-mon[75840]: pgmap v944: 321 pgs: 321 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s wr, 5 op/s
Nov 22 05:52:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c", "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp' to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:01 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc_6333f3c4-9243-4f46-8066-55fc353398c5", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc_6333f3c4-9243-4f46-8066-55fc353398c5, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp' to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc_6333f3c4-9243-4f46-8066-55fc353398c5, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta.tmp' to config b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123/.meta'
Nov 22 05:52:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:72703134-9023-42d9-b1e8-9374f84d84cc, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "format": "json"}]: dispatch
Nov 22 05:52:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc_6333f3c4-9243-4f46-8066-55fc353398c5", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "snap_name": "72703134-9023-42d9-b1e8-9374f84d84cc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 39 KiB/s wr, 8 op/s
Nov 22 05:52:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:52:03 compute-0 ceph-mon[75840]: pgmap v945: 321 pgs: 321 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 39 KiB/s wr, 8 op/s
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:43b15887-e470-4d86-be86-556764cf9b47, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:43b15887-e470-4d86-be86-556764cf9b47, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 22 05:52:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) v1
Nov 22 05:52:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 33 KiB/s wr, 7 op/s
Nov 22 05:52:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08
Nov 22 05:52:04 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08],prefix=session evict} (starting...)
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f7e75bd1-474d-4276-b05f-3f57f661f123, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f7e75bd1-474d-4276-b05f-3f57f661f123, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:04.922+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7e75bd1-474d-4276-b05f-3f57f661f123' of type subvolume
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f7e75bd1-474d-4276-b05f-3f57f661f123' of type subvolume
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f7e75bd1-474d-4276-b05f-3f57f661f123'' moved to trashcan
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f7e75bd1-474d-4276-b05f-3f57f661f123, vol_name:cephfs) < ""
Nov 22 05:52:04 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:52:04.955 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c_c83418fc-b7ae-4ce7-b133-d4b0a2f96738", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c_c83418fc-b7ae-4ce7-b133-d4b0a2f96738, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c_c83418fc-b7ae-4ce7-b133-d4b0a2f96738, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79d46880-5e77-4edf-a5df-f85c46a3035c, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47", "format": "json"}]: dispatch
Nov 22 05:52:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 22 05:52:05 compute-0 ceph-mon[75840]: pgmap v946: 321 pgs: 321 active+clean; 43 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 33 KiB/s wr, 7 op/s
Nov 22 05:52:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 22 05:52:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 52 KiB/s wr, 9 op/s
Nov 22 05:52:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "format": "json"}]: dispatch
Nov 22 05:52:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f7e75bd1-474d-4276-b05f-3f57f661f123", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c_c83418fc-b7ae-4ce7-b133-d4b0a2f96738", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "79d46880-5e77-4edf-a5df-f85c46a3035c", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:52:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 22 05:52:07 compute-0 ceph-mon[75840]: pgmap v947: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 52 KiB/s wr, 9 op/s
Nov 22 05:52:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 22 05:52:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa_79c53c5a-19fb-4b76-97fa-30294fb2fef4", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa_79c53c5a-19fb-4b76-97fa-30294fb2fef4, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa_79c53c5a-19fb-4b76-97fa-30294fb2fef4, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eed29854-f0de-4f56-8a27-13ce2bfe06fa, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:08.555+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e41b5283-9e15-4f8b-9973-4c9089dccbfc' of type subvolume
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e41b5283-9e15-4f8b-9973-4c9089dccbfc' of type subvolume
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e41b5283-9e15-4f8b-9973-4c9089dccbfc'' moved to trashcan
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e41b5283-9e15-4f8b-9973-4c9089dccbfc, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a", "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 10 op/s
Nov 22 05:52:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 22 05:52:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 22 05:52:08 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 22 05:52:08 compute-0 ceph-mon[75840]: osdmap e137: 3 total, 3 up, 3 in
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 22 05:52:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 22 05:52:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) v1
Nov 22 05:52:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08
Nov 22 05:52:09 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd/c7221b79-0f2c-4f93-9854-768c7ef67f08],prefix=session evict} (starting...)
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:09.308+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd' of type subvolume
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd' of type subvolume
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd'' moved to trashcan
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd, vol_name:cephfs) < ""
Nov 22 05:52:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 22 05:52:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 22 05:52:09 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa_79c53c5a-19fb-4b76-97fa-30294fb2fef4", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "eed29854-f0de-4f56-8a27-13ce2bfe06fa", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e41b5283-9e15-4f8b-9973-4c9089dccbfc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: pgmap v949: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 10 op/s
Nov 22 05:52:09 compute-0 ceph-mon[75840]: osdmap e138: 3 total, 3 up, 3 in
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 22 05:52:09 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 22 05:52:09 compute-0 ceph-mon[75840]: osdmap e139: 3 total, 3 up, 3 in
Nov 22 05:52:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 39 KiB/s wr, 8 op/s
Nov 22 05:52:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 22 05:52:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 22 05:52:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "format": "json"}]: dispatch
Nov 22 05:52:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bbda1cfd-a62a-4d0a-87fc-e66cc4fb1bbd", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:11 compute-0 ceph-mon[75840]: pgmap v952: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 39 KiB/s wr, 8 op/s
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be_e0c50c16-9008-44de-a6d9-9420a0cf8d15", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be_e0c50c16-9008-44de-a6d9-9420a0cf8d15, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be_e0c50c16-9008-44de-a6d9-9420a0cf8d15, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 67 KiB/s wr, 13 op/s
Nov 22 05:52:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:52:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a_9c40e4f1-cfe4-4e0b-b18c-817da5de7558", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a_9c40e4f1-cfe4-4e0b-b18c-817da5de7558, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be_e0c50c16-9008-44de-a6d9-9420a0cf8d15", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "e8012c7d-6e57-4d5f-8dd3-7b9d07a9c2be", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:13 compute-0 ceph-mon[75840]: pgmap v953: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 67 KiB/s wr, 13 op/s
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp'
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp' to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta'
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a_9c40e4f1-cfe4-4e0b-b18c-817da5de7558, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.215 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.215 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.216 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.216 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.217 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp'
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp' to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta'
Nov 22 05:52:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:52:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114743137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.682 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b447fbc6-f84c-471a-b434-4cf442d5d06a, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.904 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.905 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.905 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:52:13 compute-0 nova_compute[255660]: 2025-11-22 05:52:13.905 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:52:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a_9c40e4f1-cfe4-4e0b-b18c-817da5de7558", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "b447fbc6-f84c-471a-b434-4cf442d5d06a", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3114743137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:52:14 compute-0 nova_compute[255660]: 2025-11-22 05:52:14.513 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:52:14 compute-0 nova_compute[255660]: 2025-11-22 05:52:14.513 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:52:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 739 B/s rd, 57 KiB/s wr, 9 op/s
Nov 22 05:52:14 compute-0 nova_compute[255660]: 2025-11-22 05:52:14.901 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:52:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:52:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110080219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:52:15 compute-0 nova_compute[255660]: 2025-11-22 05:52:15.429 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:52:15 compute-0 nova_compute[255660]: 2025-11-22 05:52:15.439 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7_9f1c2860-61dc-4837-880d-0143164bc5fc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7_9f1c2860-61dc-4837-880d-0143164bc5fc, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:15 compute-0 nova_compute[255660]: 2025-11-22 05:52:15.492 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:52:15 compute-0 nova_compute[255660]: 2025-11-22 05:52:15.495 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:52:15 compute-0 nova_compute[255660]: 2025-11-22 05:52:15.495 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7_9f1c2860-61dc-4837-880d-0143164bc5fc, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:15 compute-0 ceph-mon[75840]: pgmap v954: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 739 B/s rd, 57 KiB/s wr, 9 op/s
Nov 22 05:52:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2110080219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4986e2cb-c9fa-4cc5-9de1-755b64f9eec7, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:16 compute-0 nova_compute[255660]: 2025-11-22 05:52:16.497 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47_979934f6-2426-4486-ba3a-3345281db1dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:43b15887-e470-4d86-be86-556764cf9b47_979934f6-2426-4486-ba3a-3345281db1dc, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp'
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp' to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta'
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:43b15887-e470-4d86-be86-556764cf9b47_979934f6-2426-4486-ba3a-3345281db1dc, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:43b15887-e470-4d86-be86-556764cf9b47, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 87 KiB/s wr, 12 op/s
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp'
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta.tmp' to config b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b/.meta'
Nov 22 05:52:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7_9f1c2860-61dc-4837-880d-0143164bc5fc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "4986e2cb-c9fa-4cc5-9de1-755b64f9eec7", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:43b15887-e470-4d86-be86-556764cf9b47, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:17 compute-0 nova_compute[255660]: 2025-11-22 05:52:17.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:17 compute-0 nova_compute[255660]: 2025-11-22 05:52:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:17 compute-0 nova_compute[255660]: 2025-11-22 05:52:17.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:17 compute-0 nova_compute[255660]: 2025-11-22 05:52:17.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:52:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 05:52:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 22 05:52:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 22 05:52:17 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 22 05:52:17 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47_979934f6-2426-4486-ba3a-3345281db1dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:17 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "snap_name": "43b15887-e470-4d86-be86-556764cf9b47", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:17 compute-0 ceph-mon[75840]: pgmap v955: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 87 KiB/s wr, 12 op/s
Nov 22 05:52:18 compute-0 podman[263000]: 2025-11-22 05:52:18.283933108 +0000 UTC m=+0.131168300 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 79 KiB/s wr, 12 op/s
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131_b86c0998-2bb5-4425-91b4-757abf28bbdc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:18 compute-0 ceph-mon[75840]: osdmap e140: 3 total, 3 up, 3 in
Nov 22 05:52:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 22 05:52:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 22 05:52:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131_b86c0998-2bb5-4425-91b4-757abf28bbdc, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131_b86c0998-2bb5-4425-91b4-757abf28bbdc, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp'
Nov 22 05:52:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta.tmp' to config b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec/.meta'
Nov 22 05:52:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.147 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:19 compute-0 nova_compute[255660]: 2025-11-22 05:52:19.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:19 compute-0 ceph-mon[75840]: pgmap v957: 321 pgs: 321 active+clean; 43 MiB data, 209 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 79 KiB/s wr, 12 op/s
Nov 22 05:52:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131_b86c0998-2bb5-4425-91b4-757abf28bbdc", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:19 compute-0 ceph-mon[75840]: osdmap e141: 3 total, 3 up, 3 in
Nov 22 05:52:20 compute-0 nova_compute[255660]: 2025-11-22 05:52:20.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "format": "json"}]: dispatch
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '77ba3a4e-8b78-433d-9fcd-08fc8c49251b' of type subvolume
Nov 22 05:52:20 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:20.174+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '77ba3a4e-8b78-433d-9fcd-08fc8c49251b' of type subvolume
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/77ba3a4e-8b78-433d-9fcd-08fc8c49251b'' moved to trashcan
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:77ba3a4e-8b78-433d-9fcd-08fc8c49251b, vol_name:cephfs) < ""
Nov 22 05:52:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 43 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 38 KiB/s wr, 5 op/s
Nov 22 05:52:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "snap_name": "a9d28ae0-d3cd-4cd7-a310-a66b9a7b4131", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "format": "json"}]: dispatch
Nov 22 05:52:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "77ba3a4e-8b78-433d-9fcd-08fc8c49251b", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:20 compute-0 ceph-mon[75840]: pgmap v959: 321 pgs: 321 active+clean; 43 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 38 KiB/s wr, 5 op/s
Nov 22 05:52:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 22 05:52:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 22 05:52:22 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 58 KiB/s wr, 10 op/s
Nov 22 05:52:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "format": "json"}]: dispatch
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:af20cd9a-8203-491f-b76d-599ebd8046ec, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:af20cd9a-8203-491f-b76d-599ebd8046ec, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:22 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:22.955+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af20cd9a-8203-491f-b76d-599ebd8046ec' of type subvolume
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'af20cd9a-8203-491f-b76d-599ebd8046ec' of type subvolume
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/af20cd9a-8203-491f-b76d-599ebd8046ec'' moved to trashcan
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:af20cd9a-8203-491f-b76d-599ebd8046ec, vol_name:cephfs) < ""
Nov 22 05:52:23 compute-0 ceph-mon[75840]: osdmap e142: 3 total, 3 up, 3 in
Nov 22 05:52:23 compute-0 ceph-mon[75840]: pgmap v961: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 58 KiB/s wr, 10 op/s
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/edcd7366-8dc9-4cca-b6b7-5918f501a054/.meta.tmp'
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/edcd7366-8dc9-4cca-b6b7-5918f501a054/.meta.tmp' to config b'/volumes/_nogroup/edcd7366-8dc9-4cca-b6b7-5918f501a054/.meta'
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "format": "json"}]: dispatch
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:24 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 50 KiB/s wr, 7 op/s
Nov 22 05:52:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "format": "json"}]: dispatch
Nov 22 05:52:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af20cd9a-8203-491f-b76d-599ebd8046ec", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "format": "json"}]: dispatch
Nov 22 05:52:25 compute-0 ceph-mon[75840]: pgmap v962: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 50 KiB/s wr, 7 op/s
Nov 22 05:52:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 62 KiB/s wr, 8 op/s
Nov 22 05:52:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 22 05:52:27 compute-0 ceph-mon[75840]: pgmap v963: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 62 KiB/s wr, 8 op/s
Nov 22 05:52:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 22 05:52:27 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 22 05:52:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 63 KiB/s wr, 9 op/s
Nov 22 05:52:28 compute-0 ceph-mon[75840]: osdmap e143: 3 total, 3 up, 3 in
Nov 22 05:52:29 compute-0 podman[263028]: 2025-11-22 05:52:29.217832148 +0000 UTC m=+0.072538742 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 05:52:29 compute-0 podman[263029]: 2025-11-22 05:52:29.232263781 +0000 UTC m=+0.081703201 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 05:52:29 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "format": "json"}]: dispatch
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:29.805+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'edcd7366-8dc9-4cca-b6b7-5918f501a054' of type subvolume
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'edcd7366-8dc9-4cca-b6b7-5918f501a054' of type subvolume
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/edcd7366-8dc9-4cca-b6b7-5918f501a054'' moved to trashcan
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:edcd7366-8dc9-4cca-b6b7-5918f501a054, vol_name:cephfs) < ""
Nov 22 05:52:29 compute-0 ceph-mon[75840]: pgmap v965: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 63 KiB/s wr, 9 op/s
Nov 22 05:52:30 compute-0 sudo[263069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:30 compute-0 sudo[263069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:30 compute-0 sudo[263069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 380 B/s rd, 20 KiB/s wr, 3 op/s
Nov 22 05:52:30 compute-0 sudo[263094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:52:30 compute-0 sudo[263094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:30 compute-0 sudo[263094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:30 compute-0 sudo[263119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:30 compute-0 sudo[263119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:30 compute-0 sudo[263119]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "format": "json"}]: dispatch
Nov 22 05:52:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "edcd7366-8dc9-4cca-b6b7-5918f501a054", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:30 compute-0 sudo[263144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:52:30 compute-0 sudo[263144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:31 compute-0 sudo[263144]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:31 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a5e03037-6f55-4c20-8bef-aec5faa9b71e does not exist
Nov 22 05:52:31 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a850fa2d-120b-4175-9243-18d4cffd3bff does not exist
Nov 22 05:52:31 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9a481dfa-65f0-4e0c-b8ba-3b952dbd22dc does not exist
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:52:31 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:52:31 compute-0 sshd-session[263067]: Invalid user sol from 80.94.92.182 port 37854
Nov 22 05:52:31 compute-0 sudo[263198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:31 compute-0 sudo[263198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:31 compute-0 sudo[263198]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:31 compute-0 sudo[263223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:52:31 compute-0 sudo[263223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:31 compute-0 sudo[263223]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:31 compute-0 sudo[263248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:31 compute-0 sudo[263248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:31 compute-0 sudo[263248]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:31 compute-0 sshd-session[263067]: Connection closed by invalid user sol 80.94.92.182 port 37854 [preauth]
Nov 22 05:52:31 compute-0 sudo[263273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:52:31 compute-0 sudo[263273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:31 compute-0 ceph-mon[75840]: pgmap v966: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 380 B/s rd, 20 KiB/s wr, 3 op/s
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:52:31 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:52:32 compute-0 podman[263340]: 2025-11-22 05:52:32.266365906 +0000 UTC m=+0.028568940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:32 compute-0 podman[263340]: 2025-11-22 05:52:32.533787025 +0000 UTC m=+0.295990009 container create bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:52:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 35 KiB/s wr, 4 op/s
Nov 22 05:52:32 compute-0 systemd[1]: Started libpod-conmon-bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24.scope.
Nov 22 05:52:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 22 05:52:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 22 05:52:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:32 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 22 05:52:32 compute-0 podman[263340]: 2025-11-22 05:52:32.956982826 +0000 UTC m=+0.719185800 container init bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:52:32 compute-0 podman[263340]: 2025-11-22 05:52:32.9666687 +0000 UTC m=+0.728871674 container start bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:52:32 compute-0 systemd[1]: libpod-bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24.scope: Deactivated successfully.
Nov 22 05:52:32 compute-0 inspiring_pascal[263356]: 167 167
Nov 22 05:52:32 compute-0 conmon[263356]: conmon bcbc7beb4f91324c0aa2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24.scope/container/memory.events
Nov 22 05:52:33 compute-0 podman[263340]: 2025-11-22 05:52:33.109694004 +0000 UTC m=+0.871896998 container attach bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:52:33 compute-0 podman[263340]: 2025-11-22 05:52:33.113344174 +0000 UTC m=+0.875547168 container died bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9fa84155dd15e480bd2a6b834839d5b83047570693fb6998a42db93b76e63a7-merged.mount: Deactivated successfully.
Nov 22 05:52:33 compute-0 podman[263340]: 2025-11-22 05:52:33.352167912 +0000 UTC m=+1.114370866 container remove bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:52:33 compute-0 systemd[1]: libpod-conmon-bcbc7beb4f91324c0aa2319b3a084a6b4e95d77b6fc445c6d240a9dfee7d5f24.scope: Deactivated successfully.
Nov 22 05:52:33 compute-0 podman[263381]: 2025-11-22 05:52:33.582052568 +0000 UTC m=+0.071864043 container create 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 05:52:33 compute-0 systemd[1]: Started libpod-conmon-7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97.scope.
Nov 22 05:52:33 compute-0 podman[263381]: 2025-11-22 05:52:33.553108507 +0000 UTC m=+0.042920052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:33 compute-0 podman[263381]: 2025-11-22 05:52:33.701716233 +0000 UTC m=+0.191527708 container init 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 05:52:33 compute-0 podman[263381]: 2025-11-22 05:52:33.713933117 +0000 UTC m=+0.203744592 container start 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:52:33 compute-0 podman[263381]: 2025-11-22 05:52:33.72063156 +0000 UTC m=+0.210443085 container attach 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:52:33 compute-0 ceph-mon[75840]: pgmap v967: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 35 KiB/s wr, 4 op/s
Nov 22 05:52:33 compute-0 ceph-mon[75840]: osdmap e144: 3 total, 3 up, 3 in
Nov 22 05:52:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 25 KiB/s wr, 2 op/s
Nov 22 05:52:34 compute-0 quirky_fermi[263398]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:52:34 compute-0 quirky_fermi[263398]: --> relative data size: 1.0
Nov 22 05:52:34 compute-0 quirky_fermi[263398]: --> All data devices are unavailable
Nov 22 05:52:34 compute-0 systemd[1]: libpod-7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97.scope: Deactivated successfully.
Nov 22 05:52:34 compute-0 systemd[1]: libpod-7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97.scope: Consumed 1.097s CPU time.
Nov 22 05:52:34 compute-0 podman[263381]: 2025-11-22 05:52:34.857249663 +0000 UTC m=+1.347061148 container died 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-78a5b36382d6e9f95b1cdd73539618a9c3d95942ccc1b688213039605e31e8dd-merged.mount: Deactivated successfully.
Nov 22 05:52:34 compute-0 podman[263381]: 2025-11-22 05:52:34.923063579 +0000 UTC m=+1.412875094 container remove 7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_fermi, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:52:34 compute-0 systemd[1]: libpod-conmon-7b937ab83c9b64ffaf139564b186d619be75fdc4c0da0d56a4561098e121ea97.scope: Deactivated successfully.
Nov 22 05:52:34 compute-0 sudo[263273]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:35 compute-0 sudo[263440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:35 compute-0 sudo[263440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:35 compute-0 sudo[263440]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:35 compute-0 sudo[263465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:52:35 compute-0 sudo[263465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:35 compute-0 sudo[263465]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:35 compute-0 sudo[263490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:35 compute-0 sudo[263490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:35 compute-0 sudo[263490]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:35 compute-0 sudo[263515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:52:35 compute-0 sudo[263515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.692594334 +0000 UTC m=+0.064029399 container create b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:52:35 compute-0 systemd[1]: Started libpod-conmon-b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4.scope.
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.666952404 +0000 UTC m=+0.038387529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.78954168 +0000 UTC m=+0.160976805 container init b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.799912073 +0000 UTC m=+0.171347148 container start b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.805027802 +0000 UTC m=+0.176462877 container attach b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:52:35 compute-0 frosty_volhard[263596]: 167 167
Nov 22 05:52:35 compute-0 systemd[1]: libpod-b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4.scope: Deactivated successfully.
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.807024157 +0000 UTC m=+0.178459232 container died b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 05:52:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-79d742d2b9562431ce8c4a7472df606320241e961303d9a9d525e45420dd4f72-merged.mount: Deactivated successfully.
Nov 22 05:52:35 compute-0 ceph-mon[75840]: pgmap v969: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 25 KiB/s wr, 2 op/s
Nov 22 05:52:35 compute-0 podman[263580]: 2025-11-22 05:52:35.861554016 +0000 UTC m=+0.232989071 container remove b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:52:35 compute-0 systemd[1]: libpod-conmon-b990a3dcb365cbcffb2067cadade04ef2bc2c77f5e750668090c64932891e4c4.scope: Deactivated successfully.
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5/.meta.tmp'
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5/.meta.tmp' to config b'/volumes/_nogroup/7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5/.meta'
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "format": "json"}]: dispatch
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:35 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.049261819 +0000 UTC m=+0.047972151 container create b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 05:52:36 compute-0 systemd[1]: Started libpod-conmon-b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1.scope.
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.026862158 +0000 UTC m=+0.025572510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93bb9a9010b272262036889da211f97eb72d3119af257d2114dab2e37d0ab6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93bb9a9010b272262036889da211f97eb72d3119af257d2114dab2e37d0ab6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93bb9a9010b272262036889da211f97eb72d3119af257d2114dab2e37d0ab6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d93bb9a9010b272262036889da211f97eb72d3119af257d2114dab2e37d0ab6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.143997784 +0000 UTC m=+0.142708176 container init b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.1577527 +0000 UTC m=+0.156463042 container start b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.164249537 +0000 UTC m=+0.162959879 container attach b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:52:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Nov 22 05:52:36 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:52:36.932 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:52:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:52:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:52:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:52:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:52:36 compute-0 goofy_williamson[263635]: {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     "0": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "devices": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "/dev/loop3"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             ],
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_name": "ceph_lv0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_size": "21470642176",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "name": "ceph_lv0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "tags": {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_name": "ceph",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.crush_device_class": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.encrypted": "0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_id": "0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.vdo": "0"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             },
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "vg_name": "ceph_vg0"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         }
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     ],
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     "1": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "devices": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "/dev/loop4"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             ],
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_name": "ceph_lv1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_size": "21470642176",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "name": "ceph_lv1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "tags": {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_name": "ceph",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.crush_device_class": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.encrypted": "0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_id": "1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.vdo": "0"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             },
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "vg_name": "ceph_vg1"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         }
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     ],
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     "2": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "devices": [
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "/dev/loop5"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             ],
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_name": "ceph_lv2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_size": "21470642176",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "name": "ceph_lv2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "tags": {
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.cluster_name": "ceph",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.crush_device_class": "",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.encrypted": "0",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osd_id": "2",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:                 "ceph.vdo": "0"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             },
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "type": "block",
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:             "vg_name": "ceph_vg2"
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:         }
Nov 22 05:52:36 compute-0 goofy_williamson[263635]:     ]
Nov 22 05:52:36 compute-0 goofy_williamson[263635]: }
Nov 22 05:52:36 compute-0 podman[263619]: 2025-11-22 05:52:36.983963001 +0000 UTC m=+0.982673383 container died b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:52:36 compute-0 systemd[1]: libpod-b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1.scope: Deactivated successfully.
Nov 22 05:52:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d93bb9a9010b272262036889da211f97eb72d3119af257d2114dab2e37d0ab6a-merged.mount: Deactivated successfully.
Nov 22 05:52:37 compute-0 podman[263619]: 2025-11-22 05:52:37.042235552 +0000 UTC m=+1.040945854 container remove b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williamson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:52:37 compute-0 systemd[1]: libpod-conmon-b2ec6ef70365ea50f1ac1c6ecc7b4b25c7e26c4b6816213ed4dae23d437976d1.scope: Deactivated successfully.
Nov 22 05:52:37 compute-0 sudo[263515]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:37 compute-0 sudo[263658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:37 compute-0 sudo[263658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:37 compute-0 sudo[263658]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:37 compute-0 sudo[263683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:52:37 compute-0 sudo[263683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:37 compute-0 sudo[263683]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:37 compute-0 sudo[263708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:37 compute-0 sudo[263708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:37 compute-0 sudo[263708]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:37 compute-0 sudo[263733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:52:37 compute-0 sudo[263733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "format": "json"}]: dispatch
Nov 22 05:52:37 compute-0 ceph-mon[75840]: pgmap v970: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Nov 22 05:52:37 compute-0 podman[263798]: 2025-11-22 05:52:37.916942837 +0000 UTC m=+0.072348006 container create 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:52:37 compute-0 systemd[1]: Started libpod-conmon-4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19.scope.
Nov 22 05:52:37 compute-0 podman[263798]: 2025-11-22 05:52:37.885533229 +0000 UTC m=+0.040938438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:37 compute-0 podman[263798]: 2025-11-22 05:52:37.997415563 +0000 UTC m=+0.152820742 container init 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:52:38 compute-0 podman[263798]: 2025-11-22 05:52:38.005538465 +0000 UTC m=+0.160943624 container start 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:52:38 compute-0 brave_rosalind[263815]: 167 167
Nov 22 05:52:38 compute-0 systemd[1]: libpod-4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19.scope: Deactivated successfully.
Nov 22 05:52:38 compute-0 podman[263798]: 2025-11-22 05:52:38.01051929 +0000 UTC m=+0.165924469 container attach 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 05:52:38 compute-0 podman[263798]: 2025-11-22 05:52:38.011064246 +0000 UTC m=+0.166469435 container died 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:52:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed19e8f760a4618bdbc7ba687d91b571451c801ca1f70d37688bd3403e1849d2-merged.mount: Deactivated successfully.
Nov 22 05:52:38 compute-0 podman[263798]: 2025-11-22 05:52:38.058377817 +0000 UTC m=+0.213782986 container remove 4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_rosalind, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:52:38 compute-0 systemd[1]: libpod-conmon-4c039a81e4336f6667d92573f65682b8fa778458acccf41528d46ca262956c19.scope: Deactivated successfully.
Nov 22 05:52:38 compute-0 podman[263837]: 2025-11-22 05:52:38.266539368 +0000 UTC m=+0.064323757 container create 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:52:38 compute-0 systemd[1]: Started libpod-conmon-5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271.scope.
Nov 22 05:52:38 compute-0 podman[263837]: 2025-11-22 05:52:38.239802339 +0000 UTC m=+0.037586778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:52:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec2e92e84c7b4814b22df30867542182ace7a1a9aecfd59a502368963263157/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec2e92e84c7b4814b22df30867542182ace7a1a9aecfd59a502368963263157/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec2e92e84c7b4814b22df30867542182ace7a1a9aecfd59a502368963263157/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec2e92e84c7b4814b22df30867542182ace7a1a9aecfd59a502368963263157/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:52:38 compute-0 podman[263837]: 2025-11-22 05:52:38.366168287 +0000 UTC m=+0.163952646 container init 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:52:38 compute-0 podman[263837]: 2025-11-22 05:52:38.380129038 +0000 UTC m=+0.177913387 container start 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:52:38 compute-0 podman[263837]: 2025-11-22 05:52:38.383352627 +0000 UTC m=+0.181136966 container attach 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:52:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:52:38 compute-0 ceph-mon[75840]: pgmap v971: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]: {
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_id": 1,
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "type": "bluestore"
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     },
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_id": 2,
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "type": "bluestore"
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     },
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_id": 0,
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:         "type": "bluestore"
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]:     }
Nov 22 05:52:39 compute-0 interesting_dubinsky[263853]: }
Nov 22 05:52:39 compute-0 systemd[1]: libpod-5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271.scope: Deactivated successfully.
Nov 22 05:52:39 compute-0 podman[263837]: 2025-11-22 05:52:39.490118915 +0000 UTC m=+1.287903274 container died 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:52:39 compute-0 systemd[1]: libpod-5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271.scope: Consumed 1.117s CPU time.
Nov 22 05:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ec2e92e84c7b4814b22df30867542182ace7a1a9aecfd59a502368963263157-merged.mount: Deactivated successfully.
Nov 22 05:52:39 compute-0 podman[263837]: 2025-11-22 05:52:39.56645941 +0000 UTC m=+1.364243749 container remove 5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:52:39 compute-0 systemd[1]: libpod-conmon-5079916a72daa68a5a0965d4e11b3e5c28d593da116820f814ed7ac44f1a1271.scope: Deactivated successfully.
Nov 22 05:52:39 compute-0 sudo[263733]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:52:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:52:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 069c7ed0-a98a-4652-8e04-0e4374cca381 does not exist
Nov 22 05:52:39 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 48ecfabb-a7a4-4149-9d30-c16b2e612c33 does not exist
Nov 22 05:52:39 compute-0 sudo[263900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:52:39 compute-0 sudo[263900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:39 compute-0 sudo[263900]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:39 compute-0 sudo[263925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:52:39 compute-0 sudo[263925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:52:39 compute-0 sudo[263925]: pam_unix(sudo:session): session closed for user root
Nov 22 05:52:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:52:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:52:41 compute-0 ceph-mon[75840]: pgmap v972: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "format": "json"}]: dispatch
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:41.720+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5' of type subvolume
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5' of type subvolume
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5'' moved to trashcan
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5, vol_name:cephfs) < ""
Nov 22 05:52:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "format": "json"}]: dispatch
Nov 22 05:52:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7eeb2eb3-08e3-495c-afeb-dca50ab5e8a5", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 13 KiB/s wr, 2 op/s
Nov 22 05:52:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:43 compute-0 ceph-mon[75840]: pgmap v973: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 13 KiB/s wr, 2 op/s
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:52:43
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', '.mgr']
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:52:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:52:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 11 KiB/s wr, 2 op/s
Nov 22 05:52:45 compute-0 ceph-mon[75840]: pgmap v974: 321 pgs: 321 active+clean; 44 MiB data, 210 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 11 KiB/s wr, 2 op/s
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/.meta.tmp'
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/.meta.tmp' to config b'/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/.meta'
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 26 KiB/s wr, 3 op/s
Nov 22 05:52:46 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8a3e92b9-37c2-4b00-a3e4-980f0fe980b4/.meta.tmp'
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8a3e92b9-37c2-4b00-a3e4-980f0fe980b4/.meta.tmp' to config b'/volumes/_nogroup/8a3e92b9-37c2-4b00-a3e4-980f0fe980b4/.meta'
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "format": "json"}]: dispatch
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:52:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1042431848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:52:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1042431848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "format": "json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: pgmap v975: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 26 KiB/s wr, 3 op/s
Nov 22 05:52:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1042431848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1042431848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:52:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 21 KiB/s wr, 3 op/s
Nov 22 05:52:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 podman[263950]: 2025-11-22 05:52:49.302837321 +0000 UTC m=+0.151647131 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c16ce083-3588-49bf-a148-78d666432c7e/.meta.tmp'
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c16ce083-3588-49bf-a148-78d666432c7e/.meta.tmp' to config b'/volumes/_nogroup/c16ce083-3588-49bf-a148-78d666432c7e/.meta'
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:52:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:52:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:52:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:52:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:52:49 compute-0 ceph-mon[75840]: pgmap v976: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 21 KiB/s wr, 3 op/s
Nov 22 05:52:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:52:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s
Nov 22 05:52:50 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:52:50 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "format": "json"}]: dispatch
Nov 22 05:52:50 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:52:51 compute-0 ceph-mon[75840]: pgmap v977: 321 pgs: 321 active+clean; 44 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 21 KiB/s wr, 2 op/s
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "format": "json"}]: dispatch
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:52:52 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:52:52.638+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8a3e92b9-37c2-4b00-a3e4-980f0fe980b4' of type subvolume
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8a3e92b9-37c2-4b00-a3e4-980f0fe980b4' of type subvolume
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8a3e92b9-37c2-4b00-a3e4-980f0fe980b4'' moved to trashcan
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8a3e92b9-37c2-4b00-a3e4-980f0fe980b4, vol_name:cephfs) < ""
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 5 op/s
Nov 22 05:52:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "format": "json"}]: dispatch
Nov 22 05:52:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8a3e92b9-37c2-4b00-a3e4-980f0fe980b4", "force": true, "format": "json"}]: dispatch
Nov 22 05:52:52 compute-0 ceph-mon[75840]: pgmap v978: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 5 op/s
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.246233395194374e-05 of space, bias 4.0, pg target 0.06295480074233249 quantized to 16 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:52:52 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:52:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:52:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:52:53 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:52:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:52:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:52:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:52:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:52:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 4 op/s
Nov 22 05:52:54 compute-0 ceph-mon[75840]: pgmap v979: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 4 op/s
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.932685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790774932891, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1558, "num_deletes": 257, "total_data_size": 1951618, "memory_usage": 1992584, "flush_reason": "Manual Compaction"}
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790774948147, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1916651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19628, "largest_seqno": 21185, "table_properties": {"data_size": 1909347, "index_size": 4122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17431, "raw_average_key_size": 21, "raw_value_size": 1893931, "raw_average_value_size": 2295, "num_data_blocks": 184, "num_entries": 825, "num_filter_entries": 825, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790672, "oldest_key_time": 1763790672, "file_creation_time": 1763790774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 15405 microseconds, and 7099 cpu microseconds.
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.948201) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1916651 bytes OK
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.948220) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.950513) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.950527) EVENT_LOG_v1 {"time_micros": 1763790774950523, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.950545) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1944379, prev total WAL file size 1944379, number of live WAL files 2.
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.951293) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1871KB)], [47(7121KB)]
Nov 22 05:52:54 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790774951380, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9208846, "oldest_snapshot_seqno": -1}
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4492 keys, 7432697 bytes, temperature: kUnknown
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790775011081, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7432697, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7401638, "index_size": 18725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111494, "raw_average_key_size": 24, "raw_value_size": 7319407, "raw_average_value_size": 1629, "num_data_blocks": 781, "num_entries": 4492, "num_filter_entries": 4492, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790774, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.011344) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7432697 bytes
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.012608) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.0 rd, 124.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 5019, records dropped: 527 output_compression: NoCompression
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.012626) EVENT_LOG_v1 {"time_micros": 1763790775012616, "job": 24, "event": "compaction_finished", "compaction_time_micros": 59788, "compaction_time_cpu_micros": 18283, "output_level": 6, "num_output_files": 1, "total_output_size": 7432697, "num_input_records": 5019, "num_output_records": 4492, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790775013022, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790775014420, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:54.951178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.014569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.014573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.014575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.014576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:55 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:52:55.014578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:52:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 6 op/s
Nov 22 05:52:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:52:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:52:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:52:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:57 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:52:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:52:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:52:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:52:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:52:57 compute-0 ceph-mon[75840]: pgmap v980: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 57 KiB/s wr, 6 op/s
Nov 22 05:52:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:52:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:52:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:52:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 42 KiB/s wr, 6 op/s
Nov 22 05:52:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:52:59 compute-0 ceph-mon[75840]: pgmap v981: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 42 KiB/s wr, 6 op/s
Nov 22 05:53:00 compute-0 podman[263977]: 2025-11-22 05:53:00.226684923 +0000 UTC m=+0.084074636 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 05:53:00 compute-0 podman[263978]: 2025-11-22 05:53:00.24564162 +0000 UTC m=+0.091938820 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:53:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 5 op/s
Nov 22 05:53:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:01 compute-0 ceph-mon[75840]: pgmap v982: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 42 KiB/s wr, 5 op/s
Nov 22 05:53:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:53:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:53:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:01 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 70 KiB/s wr, 8 op/s
Nov 22 05:53:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:03 compute-0 ceph-mon[75840]: pgmap v983: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 70 KiB/s wr, 8 op/s
Nov 22 05:53:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:04 compute-0 rsyslogd[1005]: imjournal: 1455 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 05:53:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 05:53:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:05 compute-0 ceph-mon[75840]: pgmap v984: 321 pgs: 321 active+clean; 45 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c16ce083-3588-49bf-a148-78d666432c7e", "format": "json"}]: dispatch
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c16ce083-3588-49bf-a148-78d666432c7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c16ce083-3588-49bf-a148-78d666432c7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:06 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:06.333+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c16ce083-3588-49bf-a148-78d666432c7e' of type subvolume
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c16ce083-3588-49bf-a148-78d666432c7e' of type subvolume
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c16ce083-3588-49bf-a148-78d666432c7e'' moved to trashcan
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c16ce083-3588-49bf-a148-78d666432c7e, vol_name:cephfs) < ""
Nov 22 05:53:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 70 KiB/s wr, 8 op/s
Nov 22 05:53:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c16ce083-3588-49bf-a148-78d666432c7e", "format": "json"}]: dispatch
Nov 22 05:53:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c16ce083-3588-49bf-a148-78d666432c7e", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:07 compute-0 ceph-mon[75840]: pgmap v985: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 70 KiB/s wr, 8 op/s
Nov 22 05:53:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:07 compute-0 sshd-session[264017]: Invalid user ethereum from 80.94.92.166 port 43886
Nov 22 05:53:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:53:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:08 compute-0 sshd-session[264017]: Connection closed by invalid user ethereum 80.94.92.166 port 43886 [preauth]
Nov 22 05:53:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 05:53:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta.tmp'
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta.tmp' to config b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57/.meta'
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:53:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:53:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:09 compute-0 ceph-mon[75840]: pgmap v986: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 05:53:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 05:53:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:11 compute-0 ceph-mon[75840]: pgmap v987: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 05:53:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 05:53:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta.tmp'
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta.tmp' to config b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae/.meta'
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:13 compute-0 ceph-mon[75840]: pgmap v988: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 05:53:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 05:53:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 42 KiB/s wr, 5 op/s
Nov 22 05:53:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:14 compute-0 ceph-mon[75840]: pgmap v989: 321 pgs: 321 active+clean; 45 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 42 KiB/s wr, 5 op/s
Nov 22 05:53:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.177 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:53:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:15 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:15 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:15.476 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:53:15 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:15.477 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:53:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:53:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905754022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.632 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.840 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.842 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.843 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.843 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.908 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.908 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:53:15 compute-0 nova_compute[255660]: 2025-11-22 05:53:15.932 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2905754022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:53:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:53:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551900576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:53:16 compute-0 nova_compute[255660]: 2025-11-22 05:53:16.468 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:53:16 compute-0 nova_compute[255660]: 2025-11-22 05:53:16.475 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:53:16 compute-0 nova_compute[255660]: 2025-11-22 05:53:16.494 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:53:16 compute-0 nova_compute[255660]: 2025-11-22 05:53:16.497 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:53:16 compute-0 nova_compute[255660]: 2025-11-22 05:53:16.498 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:53:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 05:53:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2551900576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:53:16 compute-0 ceph-mon[75840]: pgmap v990: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 71 KiB/s wr, 9 op/s
Nov 22 05:53:17 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:17.479 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:53:17 compute-0 nova_compute[255660]: 2025-11-22 05:53:17.499 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:17 compute-0 nova_compute[255660]: 2025-11-22 05:53:17.499 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:17 compute-0 nova_compute[255660]: 2025-11-22 05:53:17.500 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:17 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:17.549+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb542e3b-52e7-44e3-82c7-2e32e58f04ae' of type subvolume
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bb542e3b-52e7-44e3-82c7-2e32e58f04ae' of type subvolume
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bb542e3b-52e7-44e3-82c7-2e32e58f04ae'' moved to trashcan
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bb542e3b-52e7-44e3-82c7-2e32e58f04ae, vol_name:cephfs) < ""
Nov 22 05:53:17 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 05:53:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bb542e3b-52e7-44e3-82c7-2e32e58f04ae", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 nova_compute[255660]: 2025-11-22 05:53:18.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:18 compute-0 nova_compute[255660]: 2025-11-22 05:53:18.140 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:53:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 48 KiB/s wr, 7 op/s
Nov 22 05:53:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "format": "json"}]: dispatch
Nov 22 05:53:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:19 compute-0 ceph-mon[75840]: pgmap v991: 321 pgs: 321 active+clean; 46 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 48 KiB/s wr, 7 op/s
Nov 22 05:53:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:19 compute-0 nova_compute[255660]: 2025-11-22 05:53:19.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:19 compute-0 nova_compute[255660]: 2025-11-22 05:53:19.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:20 compute-0 nova_compute[255660]: 2025-11-22 05:53:20.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:20 compute-0 nova_compute[255660]: 2025-11-22 05:53:20.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:53:20 compute-0 nova_compute[255660]: 2025-11-22 05:53:20.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:53:20 compute-0 nova_compute[255660]: 2025-11-22 05:53:20.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:53:20 compute-0 podman[264066]: 2025-11-22 05:53:20.234282971 +0000 UTC m=+0.091925899 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 05:53:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 05:53:20 compute-0 ceph-mon[75840]: pgmap v992: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 05:53:21 compute-0 nova_compute[255660]: 2025-11-22 05:53:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:21 compute-0 nova_compute[255660]: 2025-11-22 05:53:21.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:53:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 9 op/s
Nov 22 05:53:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta.tmp'
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta.tmp' to config b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d/.meta'
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:23 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "target_sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:53:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, target_sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:53:23 compute-0 ceph-mon[75840]: pgmap v993: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 9 op/s
Nov 22 05:53:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 05:53:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 05:53:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 76eca804-5302-4f65-9ec9-887e332e0764 for path b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101'
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, target_sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.068+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101)
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:24.168+0000 7f533eb6b640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101) -- by 0 seconds
Nov 22 05:53:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "target_sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:53:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:53:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:53:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:53:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:25 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:53:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:53:25 compute-0 ceph-mon[75840]: pgmap v994: 321 pgs: 321 active+clean; 46 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:53:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:53:25 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.mscchl(active, since 28m)
Nov 22 05:53:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:26 compute-0 ceph-mon[75840]: mgrmap e12: compute-0.mscchl(active, since 28m)
Nov 22 05:53:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 94 KiB/s wr, 10 op/s
Nov 22 05:53:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:53:27 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:27 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.snap/ab580b6a-b19b-46ad-8a5e-1d8d79733bf6/fe3882fc-5c1d-4277-ae52-5cdb0f8dabd4' to b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/5706d2b5-6899-48ca-9951-7383c7ee3e88'
Nov 22 05:53:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:27 compute-0 ceph-mon[75840]: pgmap v995: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 94 KiB/s wr, 10 op/s
Nov 22 05:53:27 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 65 KiB/s wr, 8 op/s
Nov 22 05:53:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:29 compute-0 ceph-mon[75840]: pgmap v996: 321 pgs: 321 active+clean; 47 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 65 KiB/s wr, 8 op/s
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:29.536+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ea6c199-1cc2-4500-b5d2-1d98c6523e3d' of type subvolume
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4ea6c199-1cc2-4500-b5d2-1d98c6523e3d' of type subvolume
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 76eca804-5302-4f65-9ec9-887e332e0764
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta.tmp' to config b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101/.meta'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 2d92936c-d826-4675-9b10-c118c0461101)
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4ea6c199-1cc2-4500-b5d2-1d98c6523e3d'' moved to trashcan
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4ea6c199-1cc2-4500-b5d2-1d98c6523e3d, vol_name:cephfs) < ""
Nov 22 05:53:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "format": "json"}]: dispatch
Nov 22 05:53:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4ea6c199-1cc2-4500-b5d2-1d98c6523e3d", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 05:53:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta.tmp'
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta.tmp' to config b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f/.meta'
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:31 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:31 compute-0 podman[264119]: 2025-11-22 05:53:31.233663456 +0000 UTC m=+0.087354275 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 05:53:31 compute-0 podman[264120]: 2025-11-22 05:53:31.236705899 +0000 UTC m=+0.086622006 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:53:31 compute-0 ceph-mon[75840]: pgmap v997: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 05:53:31 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 05:53:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 67 KiB/s wr, 10 op/s
Nov 22 05:53:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:53:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:53:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:33 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:33 compute-0 ceph-mon[75840]: pgmap v998: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 67 KiB/s wr, 10 op/s
Nov 22 05:53:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:53:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:53:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 05:53:34 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:34 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:35 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:35.005+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f' of type subvolume
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f' of type subvolume
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f'' moved to trashcan
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f, vol_name:cephfs) < ""
Nov 22 05:53:35 compute-0 ceph-mon[75840]: pgmap v999: 321 pgs: 321 active+clean; 47 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 05:53:36 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:53:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 86 KiB/s wr, 11 op/s
Nov 22 05:53:36 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8c5801aa-d1a9-4d9f-9257-1e5c7cd7f67f", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: pgmap v1000: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 86 KiB/s wr, 11 op/s
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:36 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:53:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.933 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:53:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:53:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:53:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta.tmp'
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta.tmp' to config b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10/.meta'
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:38 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:38 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 56 KiB/s wr, 9 op/s
Nov 22 05:53:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 05:53:39 compute-0 ceph-mon[75840]: pgmap v1001: 321 pgs: 321 active+clean; 47 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 56 KiB/s wr, 9 op/s
Nov 22 05:53:39 compute-0 sudo[264158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:39 compute-0 sudo[264158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:39 compute-0 sudo[264158]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:39 compute-0 sudo[264183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:53:39 compute-0 sudo[264183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:39 compute-0 sudo[264183]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:40 compute-0 sudo[264208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:40 compute-0 sudo[264208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:40 compute-0 sudo[264208]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:40 compute-0 sudo[264233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:53:40 compute-0 sudo[264233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:40 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:40 compute-0 sudo[264233]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 87 KiB/s wr, 10 op/s
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 2ce028e8-de6a-4700-9956-08f41103f333 does not exist
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e2dfa968-e428-40d6-9c91-55e6faa1eb1a does not exist
Nov 22 05:53:40 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev d374078e-d5dc-4dba-a3cb-edbe39f5dd75 does not exist
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:53:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:53:40 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:53:40 compute-0 sudo[264289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:40 compute-0 sudo[264289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:40 compute-0 sudo[264289]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:40 compute-0 sudo[264314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:53:40 compute-0 sudo[264314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:40 compute-0 sudo[264314]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:41 compute-0 sudo[264339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:41 compute-0 sudo[264339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:41 compute-0 sudo[264339]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:41 compute-0 sudo[264364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:53:41 compute-0 sudo[264364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.629713522 +0000 UTC m=+0.107276460 container create fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.549007059 +0000 UTC m=+0.026569967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: pgmap v1002: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 87 KiB/s wr, 10 op/s
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:53:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:53:41 compute-0 systemd[1]: Started libpod-conmon-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope.
Nov 22 05:53:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.806022854 +0000 UTC m=+0.283585792 container init fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.819032999 +0000 UTC m=+0.296595937 container start fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:53:41 compute-0 relaxed_galois[264446]: 167 167
Nov 22 05:53:41 compute-0 systemd[1]: libpod-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope: Deactivated successfully.
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.829268689 +0000 UTC m=+0.306831627 container attach fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:53:41 compute-0 podman[264430]: 2025-11-22 05:53:41.83151264 +0000 UTC m=+0.309075578 container died fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:53:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5315e0c596324c1869a68c70fc3dc8a2a276ee51ecc7e4b40dc9d23ccc187a47-merged.mount: Deactivated successfully.
Nov 22 05:53:42 compute-0 podman[264430]: 2025-11-22 05:53:42.006597528 +0000 UTC m=+0.484160466 container remove fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_galois, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:53:42 compute-0 systemd[1]: libpod-conmon-fba4f6a6a519e1530a0a17a90cb2265af04f7b61bf9d445046a40f4e0993afa6.scope: Deactivated successfully.
Nov 22 05:53:42 compute-0 podman[264471]: 2025-11-22 05:53:42.305087565 +0000 UTC m=+0.112299336 container create 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 05:53:42 compute-0 podman[264471]: 2025-11-22 05:53:42.225030071 +0000 UTC m=+0.032241902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:42 compute-0 systemd[1]: Started libpod-conmon-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope.
Nov 22 05:53:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:42 compute-0 podman[264471]: 2025-11-22 05:53:42.485302525 +0000 UTC m=+0.292514266 container init 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 22 05:53:42 compute-0 podman[264471]: 2025-11-22 05:53:42.49463997 +0000 UTC m=+0.301851751 container start 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:53:42 compute-0 podman[264471]: 2025-11-22 05:53:42.545668242 +0000 UTC m=+0.352879983 container attach 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:53:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 54 KiB/s wr, 9 op/s
Nov 22 05:53:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:43 compute-0 pensive_brown[264488]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:53:43 compute-0 pensive_brown[264488]: --> relative data size: 1.0
Nov 22 05:53:43 compute-0 pensive_brown[264488]: --> All data devices are unavailable
Nov 22 05:53:43 compute-0 systemd[1]: libpod-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Deactivated successfully.
Nov 22 05:53:43 compute-0 systemd[1]: libpod-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Consumed 1.103s CPU time.
Nov 22 05:53:43 compute-0 podman[264471]: 2025-11-22 05:53:43.646025776 +0000 UTC m=+1.453237557 container died 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:53:43
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:53:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-934e8eb6ebf0dd32b5a4c39276120a3af2de8c020d2b2c763caeecb190dd4df3-merged.mount: Deactivated successfully.
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:53:43 compute-0 ceph-mon[75840]: pgmap v1003: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 54 KiB/s wr, 9 op/s
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:53:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:53:44 compute-0 podman[264471]: 2025-11-22 05:53:44.184680149 +0000 UTC m=+1.991891930 container remove 584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:53:44 compute-0 systemd[1]: libpod-conmon-584565ca753a353c8219dee1a28a8ff6615052d05ec411afe9183b021805d598.scope: Deactivated successfully.
Nov 22 05:53:44 compute-0 sudo[264364]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:44 compute-0 sudo[264529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:44 compute-0 sudo[264529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:44 compute-0 sudo[264529]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:44 compute-0 sudo[264554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:53:44 compute-0 sudo[264554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:44 compute-0 sudo[264554]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:44 compute-0 sudo[264579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:44 compute-0 sudo[264579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:44 compute-0 sudo[264579]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:44 compute-0 sudo[264604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:53:44 compute-0 sudo[264604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:53:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:44 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 05:53:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:44 compute-0 podman[264670]: 2025-11-22 05:53:44.968331799 +0000 UTC m=+0.052915356 container create 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 05:53:45 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:45 compute-0 ceph-mon[75840]: pgmap v1004: 321 pgs: 321 active+clean; 47 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 52 KiB/s wr, 6 op/s
Nov 22 05:53:45 compute-0 systemd[1]: Started libpod-conmon-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope.
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:44.941236058 +0000 UTC m=+0.025819655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:45.070332322 +0000 UTC m=+0.154915959 container init 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:45.080132109 +0000 UTC m=+0.164715676 container start 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:45.084414746 +0000 UTC m=+0.168998333 container attach 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:53:45 compute-0 awesome_swartz[264687]: 167 167
Nov 22 05:53:45 compute-0 systemd[1]: libpod-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope: Deactivated successfully.
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:45.08857061 +0000 UTC m=+0.173154237 container died 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1deca70c3e43d069f3d73706731a86d10a6873642ea95859b718c2c061ce2e74-merged.mount: Deactivated successfully.
Nov 22 05:53:45 compute-0 podman[264670]: 2025-11-22 05:53:45.142499542 +0000 UTC m=+0.227083129 container remove 83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:53:45 compute-0 systemd[1]: libpod-conmon-83134f5188a39b2e25bd3cdf83266aa262437c43faabe6a409839e9e2d03699f.scope: Deactivated successfully.
Nov 22 05:53:45 compute-0 podman[264710]: 2025-11-22 05:53:45.301115591 +0000 UTC m=+0.024402457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:45 compute-0 podman[264710]: 2025-11-22 05:53:45.457100229 +0000 UTC m=+0.180387065 container create e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:53:45 compute-0 systemd[1]: Started libpod-conmon-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope.
Nov 22 05:53:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:45 compute-0 podman[264710]: 2025-11-22 05:53:45.571262195 +0000 UTC m=+0.294549001 container init e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cedb7eed-2602-4012-a237-08eac957da10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cedb7eed-2602-4012-a237-08eac957da10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:45.580+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cedb7eed-2602-4012-a237-08eac957da10' of type subvolume
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cedb7eed-2602-4012-a237-08eac957da10' of type subvolume
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:45 compute-0 podman[264710]: 2025-11-22 05:53:45.588795294 +0000 UTC m=+0.312082140 container start e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cedb7eed-2602-4012-a237-08eac957da10'' moved to trashcan
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cedb7eed-2602-4012-a237-08eac957da10, vol_name:cephfs) < ""
Nov 22 05:53:45 compute-0 podman[264710]: 2025-11-22 05:53:45.597580753 +0000 UTC m=+0.320867569 container attach e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]: {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     "0": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "devices": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "/dev/loop3"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             ],
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_name": "ceph_lv0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_size": "21470642176",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "name": "ceph_lv0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "tags": {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_name": "ceph",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.crush_device_class": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.encrypted": "0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_id": "0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.vdo": "0"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             },
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "vg_name": "ceph_vg0"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         }
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     ],
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     "1": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "devices": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "/dev/loop4"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             ],
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_name": "ceph_lv1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_size": "21470642176",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "name": "ceph_lv1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "tags": {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_name": "ceph",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.crush_device_class": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.encrypted": "0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_id": "1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.vdo": "0"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             },
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "vg_name": "ceph_vg1"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         }
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     ],
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     "2": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "devices": [
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "/dev/loop5"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             ],
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_name": "ceph_lv2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_size": "21470642176",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "name": "ceph_lv2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "tags": {
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.cluster_name": "ceph",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.crush_device_class": "",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.encrypted": "0",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osd_id": "2",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:                 "ceph.vdo": "0"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             },
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "type": "block",
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:             "vg_name": "ceph_vg2"
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:         }
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]:     ]
Nov 22 05:53:46 compute-0 sleepy_blackburn[264726]: }
Nov 22 05:53:46 compute-0 systemd[1]: libpod-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope: Deactivated successfully.
Nov 22 05:53:46 compute-0 podman[264710]: 2025-11-22 05:53:46.401241899 +0000 UTC m=+1.124528705 container died e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d03d0b8a564b906f4f8cfc03fa9efb24b63529b23836c31dfab926d722eb1c6-merged.mount: Deactivated successfully.
Nov 22 05:53:46 compute-0 podman[264710]: 2025-11-22 05:53:46.489696043 +0000 UTC m=+1.212982839 container remove e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 05:53:46 compute-0 systemd[1]: libpod-conmon-e740268d4a9fddf630614eeaebaf758c3ce37aac6ab67e633025eb515d4265ca.scope: Deactivated successfully.
Nov 22 05:53:46 compute-0 sudo[264604]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:46 compute-0 sudo[264749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:46 compute-0 sudo[264749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:46 compute-0 sudo[264749]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:46 compute-0 sudo[264774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:53:46 compute-0 sudo[264774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:46 compute-0 sudo[264774]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cedb7eed-2602-4012-a237-08eac957da10", "format": "json"}]: dispatch
Nov 22 05:53:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cedb7eed-2602-4012-a237-08eac957da10", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 05:53:46 compute-0 sudo[264799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:46 compute-0 sudo[264799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:46 compute-0 sudo[264799]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:46 compute-0 sudo[264824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:53:46 compute-0 sudo[264824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:53:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:53:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:53:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.251301411 +0000 UTC m=+0.048370861 container create 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:53:47 compute-0 systemd[1]: Started libpod-conmon-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope.
Nov 22 05:53:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.228850248 +0000 UTC m=+0.025919728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.336075645 +0000 UTC m=+0.133145135 container init 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.343786615 +0000 UTC m=+0.140856065 container start 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:53:47 compute-0 elegant_shockley[264905]: 167 167
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.34983643 +0000 UTC m=+0.146905900 container attach 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:53:47 compute-0 systemd[1]: libpod-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope: Deactivated successfully.
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.350244292 +0000 UTC m=+0.147313802 container died 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dba0aa61c19abd8c7b1dfc566daa619732413485a7080b0d1f002949c4aafffb-merged.mount: Deactivated successfully.
Nov 22 05:53:47 compute-0 podman[264888]: 2025-11-22 05:53:47.399043084 +0000 UTC m=+0.196112534 container remove 4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 05:53:47 compute-0 systemd[1]: libpod-conmon-4d46314fe7812a52af6b59ef4b9c346b629625deea673cb3a1bc5e7b7d195268.scope: Deactivated successfully.
Nov 22 05:53:47 compute-0 podman[264928]: 2025-11-22 05:53:47.578217044 +0000 UTC m=+0.048351051 container create 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:53:47 compute-0 systemd[1]: Started libpod-conmon-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope.
Nov 22 05:53:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:47 compute-0 podman[264928]: 2025-11-22 05:53:47.552647716 +0000 UTC m=+0.022781813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:53:47 compute-0 podman[264928]: 2025-11-22 05:53:47.661407945 +0000 UTC m=+0.131542002 container init 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:53:47 compute-0 podman[264928]: 2025-11-22 05:53:47.667984134 +0000 UTC m=+0.138118141 container start 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:53:47 compute-0 podman[264928]: 2025-11-22 05:53:47.671614513 +0000 UTC m=+0.141748520 container attach 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:53:47 compute-0 ceph-mon[75840]: pgmap v1005: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 82 KiB/s wr, 9 op/s
Nov 22 05:53:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:53:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1918865413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:53:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:53:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:48 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:48 compute-0 clever_spence[264944]: {
Nov 22 05:53:48 compute-0 clever_spence[264944]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_id": 1,
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "type": "bluestore"
Nov 22 05:53:48 compute-0 clever_spence[264944]:     },
Nov 22 05:53:48 compute-0 clever_spence[264944]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_id": 2,
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "type": "bluestore"
Nov 22 05:53:48 compute-0 clever_spence[264944]:     },
Nov 22 05:53:48 compute-0 clever_spence[264944]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_id": 0,
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:53:48 compute-0 clever_spence[264944]:         "type": "bluestore"
Nov 22 05:53:48 compute-0 clever_spence[264944]:     }
Nov 22 05:53:48 compute-0 clever_spence[264944]: }
Nov 22 05:53:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:53:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 63 KiB/s wr, 8 op/s
Nov 22 05:53:48 compute-0 systemd[1]: libpod-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Deactivated successfully.
Nov 22 05:53:48 compute-0 systemd[1]: libpod-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Consumed 1.083s CPU time.
Nov 22 05:53:48 compute-0 podman[264928]: 2025-11-22 05:53:48.754986223 +0000 UTC m=+1.225120250 container died 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a374dc7679c36dbb45200497ddd8e926672491876e952b65a40bcf5be0c991a5-merged.mount: Deactivated successfully.
Nov 22 05:53:48 compute-0 podman[264928]: 2025-11-22 05:53:48.834664508 +0000 UTC m=+1.304798535 container remove 7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 05:53:48 compute-0 systemd[1]: libpod-conmon-7b6f3312c984f3dad320ff71d5e5460a9390dd96fd8464ae20e6ed13a62dce42.scope: Deactivated successfully.
Nov 22 05:53:48 compute-0 sudo[264824]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:53:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:53:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1df39db0-85ea-4b9f-b213-a5a09958f86a does not exist
Nov 22 05:53:48 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3d972d76-7f1f-41df-bdbd-82bb7dd633ba does not exist
Nov 22 05:53:48 compute-0 sudo[264993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:53:48 compute-0 sudo[264993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:48 compute-0 sudo[264993]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:49 compute-0 sudo[265018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:53:49 compute-0 sudo[265018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:53:49 compute-0 sudo[265018]: pam_unix(sudo:session): session closed for user root
Nov 22 05:53:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:53:49 compute-0 ceph-mon[75840]: pgmap v1006: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 63 KiB/s wr, 8 op/s
Nov 22 05:53:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:49 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:53:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 8 op/s
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta.tmp'
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta.tmp' to config b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e/.meta'
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 podman[265044]: 2025-11-22 05:53:51.274718739 +0000 UTC m=+0.130652408 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 05:53:51 compute-0 ceph-mon[75840]: pgmap v1007: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 80 KiB/s wr, 8 op/s
Nov 22 05:53:51 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:51 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 49 KiB/s wr, 7 op/s
Nov 22 05:53:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 05:53:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:52 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00010924883603568405 of space, bias 4.0, pg target 0.13109860324282085 quantized to 16 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:53:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:53:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:53:53 compute-0 ceph-mon[75840]: pgmap v1008: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 49 KiB/s wr, 7 op/s
Nov 22 05:53:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:53:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:53:55 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:53:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:53:55 compute-0 ceph-mon[75840]: pgmap v1009: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 48 KiB/s wr, 5 op/s
Nov 22 05:53:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:53:55 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:53:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 05:53:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:53:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4847 writes, 21K keys, 4847 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4847 writes, 4847 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1497 writes, 6905 keys, 1497 commit groups, 1.0 writes per commit group, ingest: 9.50 MB, 0.02 MB/s
                                           Interval WAL: 1497 writes, 1497 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.23              0.10        12    0.019       0      0       0.0       0.0
                                             L6      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    138.5    113.8      0.68              0.29        11    0.061     48K   5784       0.0       0.0
                                            Sum      1/0    7.09 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    103.7    111.6      0.90              0.39        23    0.039     48K   5784       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.2     96.5     97.3      0.46              0.18        10    0.046     23K   2598       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    138.5    113.8      0.68              0.29        11    0.061     48K   5784       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    109.4      0.22              0.10        11    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.023, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 304.00 MB usage: 8.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.00012 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(550,8.17 MB,2.68871%) FilterBlock(24,141.98 KB,0.0456107%) IndexBlock(24,270.95 KB,0.0870403%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81a5c3f3-2894-44cc-9d89-89c13467813e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81a5c3f3-2894-44cc-9d89-89c13467813e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81a5c3f3-2894-44cc-9d89-89c13467813e' of type subvolume
Nov 22 05:53:57 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:53:57.055+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81a5c3f3-2894-44cc-9d89-89c13467813e' of type subvolume
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81a5c3f3-2894-44cc-9d89-89c13467813e'' moved to trashcan
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81a5c3f3-2894-44cc-9d89-89c13467813e, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mon[75840]: pgmap v1010: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:53:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:53:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:53:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 6 op/s
Nov 22 05:53:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "format": "json"}]: dispatch
Nov 22 05:53:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81a5c3f3-2894-44cc-9d89-89c13467813e", "force": true, "format": "json"}]: dispatch
Nov 22 05:53:58 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:53:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:53:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:53:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:53:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:53:59 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mon[75840]: pgmap v1011: 321 pgs: 321 active+clean; 48 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 44 KiB/s wr, 6 op/s
Nov 22 05:53:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:53:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:54:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:01 compute-0 ceph-mon[75840]: pgmap v1012: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:54:02 compute-0 podman[265071]: 2025-11-22 05:54:02.228748464 +0000 UTC m=+0.079033738 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 05:54:02 compute-0 podman[265072]: 2025-11-22 05:54:02.241815861 +0000 UTC m=+0.088128716 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:02.611+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6eab6156-d31f-4c5e-8b3f-a70a75baac57' of type subvolume
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6eab6156-d31f-4c5e-8b3f-a70a75baac57' of type subvolume
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6eab6156-d31f-4c5e-8b3f-a70a75baac57'' moved to trashcan
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6eab6156-d31f-4c5e-8b3f-a70a75baac57, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 05:54:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6eab6156-d31f-4c5e-8b3f-a70a75baac57", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "format": "json"}]: dispatch
Nov 22 05:54:02 compute-0 ceph-mon[75840]: pgmap v1013: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 9 op/s
Nov 22 05:54:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:54:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:54:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:03 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:03 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:03 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:03 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 7 op/s
Nov 22 05:54:04 compute-0 ceph-mon[75840]: pgmap v1014: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 66 KiB/s wr, 7 op/s
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp'
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta.tmp' to config b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98/.meta'
Nov 22 05:54:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ace986ad-e44f-45e0-bae4-482714700fcb, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb_e075958d-e50e-4903-9677-98e9d6e8b448", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "snap_name": "ace986ad-e44f-45e0-bae4-482714700fcb", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 89 KiB/s wr, 10 op/s
Nov 22 05:54:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mon[75840]: pgmap v1015: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 89 KiB/s wr, 10 op/s
Nov 22 05:54:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 22 05:54:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 22 05:54:07 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 22 05:54:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2d92936c-d826-4675-9b10-c118c0461101, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2d92936c-d826-4675-9b10-c118c0461101'' moved to trashcan
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2d92936c-d826-4675-9b10-c118c0461101, vol_name:cephfs) < ""
Nov 22 05:54:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 05:54:08 compute-0 ceph-mon[75840]: osdmap e145: 3 total, 3 up, 3 in
Nov 22 05:54:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2d92936c-d826-4675-9b10-c118c0461101", "format": "json"}]: dispatch
Nov 22 05:54:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2d92936c-d826-4675-9b10-c118c0461101", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:08 compute-0 ceph-mon[75840]: pgmap v1017: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39fe5e7b-616f-4319-8856-cfe7e482fa98' of type subvolume
Nov 22 05:54:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:09.245+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39fe5e7b-616f-4319-8856-cfe7e482fa98' of type subvolume
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/39fe5e7b-616f-4319-8856-cfe7e482fa98'' moved to trashcan
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39fe5e7b-616f-4319-8856-cfe7e482fa98, vol_name:cephfs) < ""
Nov 22 05:54:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "format": "json"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39fe5e7b-616f-4319-8856-cfe7e482fa98", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:10 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:54:10 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:10 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 05:54:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mon[75840]: pgmap v1018: 321 pgs: 321 active+clean; 49 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 67 KiB/s wr, 7 op/s
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:11 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp'
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta.tmp' to config b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89/.meta'
Nov 22 05:54:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ab580b6a-b19b-46ad-8a5e-1d8d79733bf6, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6_b4b7b05d-0976-4cf3-a526-f3a3648db0ed", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "snap_name": "ab580b6a-b19b-46ad-8a5e-1d8d79733bf6", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:54:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:13 compute-0 ceph-mon[75840]: pgmap v1019: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:13 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta.tmp'
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta.tmp' to config b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2/.meta'
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.165 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.166 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:15.385+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '576c395a-0c7b-4d45-a49a-9d0c63369a89' of type subvolume
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '576c395a-0c7b-4d45-a49a-9d0c63369a89' of type subvolume
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/576c395a-0c7b-4d45-a49a-9d0c63369a89'' moved to trashcan
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:576c395a-0c7b-4d45-a49a-9d0c63369a89, vol_name:cephfs) < ""
Nov 22 05:54:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mon[75840]: pgmap v1020: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:54:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:54:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023680856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.656 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.849 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.850 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.851 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.851 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.985 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:54:15 compute-0 nova_compute[255660]: 2025-11-22 05:54:15.985 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.015 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:54:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:54:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278697648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.474 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.480 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.522 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.525 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:54:16 compute-0 nova_compute[255660]: 2025-11-22 05:54:16.526 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "576c395a-0c7b-4d45-a49a-9d0c63369a89", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3023680856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3278697648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:54:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 10 op/s
Nov 22 05:54:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 22 05:54:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 22 05:54:17 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 22 05:54:17 compute-0 ceph-mon[75840]: pgmap v1021: 321 pgs: 321 active+clean; 50 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 10 op/s
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 22 05:54:18 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:54:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:18 compute-0 nova_compute[255660]: 2025-11-22 05:54:18.528 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:18 compute-0 nova_compute[255660]: 2025-11-22 05:54:18.528 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:18 compute-0 nova_compute[255660]: 2025-11-22 05:54:18.529 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:18 compute-0 nova_compute[255660]: 2025-11-22 05:54:18.529 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:54:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:18 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 97 KiB/s wr, 13 op/s
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta.tmp'
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta.tmp' to config b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0/.meta'
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mon[75840]: osdmap e146: 3 total, 3 up, 3 in
Nov 22 05:54:18 compute-0 ceph-mon[75840]: osdmap e147: 3 total, 3 up, 3 in
Nov 22 05:54:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:19 compute-0 ceph-mon[75840]: pgmap v1024: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 97 KiB/s wr, 13 op/s
Nov 22 05:54:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 05:54:20 compute-0 nova_compute[255660]: 2025-11-22 05:54:20.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 46 KiB/s wr, 6 op/s
Nov 22 05:54:20 compute-0 ceph-mon[75840]: pgmap v1025: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 46 KiB/s wr, 6 op/s
Nov 22 05:54:21 compute-0 nova_compute[255660]: 2025-11-22 05:54:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:54:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:21 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:21 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:22 compute-0 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:22 compute-0 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:54:22 compute-0 nova_compute[255660]: 2025-11-22 05:54:22.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:54:22 compute-0 nova_compute[255660]: 2025-11-22 05:54:22.156 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:54:22 compute-0 podman[265158]: 2025-11-22 05:54:22.254714705 +0000 UTC m=+0.108342268 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48201a49-7fb5-455c-9d81-35b89fbf42a0' of type subvolume
Nov 22 05:54:22 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:22.635+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48201a49-7fb5-455c-9d81-35b89fbf42a0' of type subvolume
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/48201a49-7fb5-455c-9d81-35b89fbf42a0'' moved to trashcan
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48201a49-7fb5-455c-9d81-35b89fbf42a0, vol_name:cephfs) < ""
Nov 22 05:54:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 99 KiB/s wr, 12 op/s
Nov 22 05:54:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "format": "json"}]: dispatch
Nov 22 05:54:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48201a49-7fb5-455c-9d81-35b89fbf42a0", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:22 compute-0 ceph-mon[75840]: pgmap v1026: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 99 KiB/s wr, 12 op/s
Nov 22 05:54:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 22 05:54:23 compute-0 nova_compute[255660]: 2025-11-22 05:54:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:23 compute-0 nova_compute[255660]: 2025-11-22 05:54:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:54:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 22 05:54:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 22 05:54:24 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:24.169 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:54:24 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:24.170 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:54:24 compute-0 ceph-mon[75840]: osdmap e148: 3 total, 3 up, 3 in
Nov 22 05:54:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 61 KiB/s wr, 7 op/s
Nov 22 05:54:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:54:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:54:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:54:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:25 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:25 compute-0 ceph-mon[75840]: pgmap v1028: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 292 B/s rd, 61 KiB/s wr, 7 op/s
Nov 22 05:54:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:54:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:54:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:26 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:26.595+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c16f3c1-b6c6-4461-9394-db28e06b71e2' of type subvolume
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9c16f3c1-b6c6-4461-9394-db28e06b71e2' of type subvolume
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9c16f3c1-b6c6-4461-9394-db28e06b71e2'' moved to trashcan
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9c16f3c1-b6c6-4461-9394-db28e06b71e2, vol_name:cephfs) < ""
Nov 22 05:54:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 238 B/s rd, 50 KiB/s wr, 6 op/s
Nov 22 05:54:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "format": "json"}]: dispatch
Nov 22 05:54:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9c16f3c1-b6c6-4461-9394-db28e06b71e2", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:27 compute-0 ceph-mon[75840]: pgmap v1029: 321 pgs: 321 active+clean; 50 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 238 B/s rd, 50 KiB/s wr, 6 op/s
Nov 22 05:54:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:54:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:54:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:28 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:28 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:28 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:29 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:29 compute-0 ceph-mon[75840]: pgmap v1030: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:54:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:29 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta.tmp'
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta.tmp' to config b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e/.meta'
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:54:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:31 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:31 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 05:54:31 compute-0 ceph-mon[75840]: pgmap v1031: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:54:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:54:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:32 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 05:54:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:54:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:54:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:33 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:33.172 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:54:33 compute-0 podman[265186]: 2025-11-22 05:54:33.230407872 +0000 UTC m=+0.075640395 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 22 05:54:33 compute-0 podman[265187]: 2025-11-22 05:54:33.239603423 +0000 UTC m=+0.081369392 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 05:54:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:54:33 compute-0 ceph-mon[75840]: pgmap v1032: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:34.352+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd572fcc0-c0a8-4fe7-b2ef-39477199386e' of type subvolume
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd572fcc0-c0a8-4fe7-b2ef-39477199386e' of type subvolume
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d572fcc0-c0a8-4fe7-b2ef-39477199386e'' moved to trashcan
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d572fcc0-c0a8-4fe7-b2ef-39477199386e, vol_name:cephfs) < ""
Nov 22 05:54:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 353 B/s rd, 66 KiB/s wr, 6 op/s
Nov 22 05:54:34 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "format": "json"}]: dispatch
Nov 22 05:54:34 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d572fcc0-c0a8-4fe7-b2ef-39477199386e", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:34 compute-0 ceph-mon[75840]: pgmap v1033: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 353 B/s rd, 66 KiB/s wr, 6 op/s
Nov 22 05:54:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:54:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:54:36 compute-0 ceph-mon[75840]: pgmap v1034: 321 pgs: 321 active+clean; 51 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:54:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:54:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.934 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:54:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:54:36.935 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:54:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:54:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:54:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:39 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mon[75840]: pgmap v1035: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 99 KiB/s wr, 10 op/s
Nov 22 05:54:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta.tmp'
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta.tmp' to config b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/.meta'
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:39 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 05:54:40 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:40 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:40 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 05:54:41 compute-0 ceph-mon[75840]: pgmap v1036: 321 pgs: 321 active+clean; 52 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 05:54:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 10 op/s
Nov 22 05:54:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "tenant_id": "ff09e2486e9d4c72b3f5e832bcf1885a", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume authorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, tenant_id:ff09e2486e9d4c72b3f5e832bcf1885a, vol_name:cephfs) < ""
Nov 22 05:54:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"} v 0) v1
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1135923250 with tenant ff09e2486e9d4c72b3f5e832bcf1885a
Nov 22 05:54:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume authorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, tenant_id:ff09e2486e9d4c72b3f5e832bcf1885a, vol_name:cephfs) < ""
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:54:43
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['images', '.mgr', '.rgw.root', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b919d90>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536ba1da60>)]
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 05:54:43 compute-0 ceph-mon[75840]: pgmap v1037: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 10 op/s
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:43 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1135923250", "caps": ["mds", "allow rw path=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:54:43 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2223829226
Nov 22 05:54:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 6 op/s
Nov 22 05:54:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "tenant_id": "ff09e2486e9d4c72b3f5e832bcf1885a", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:44 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.mscchl(active, since 30m)
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume deauthorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"} v 0) v1
Nov 22 05:54:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"} v 0) v1
Nov 22 05:54:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]': finished
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume deauthorize, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume evict, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1135923250, client_metadata.root=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0
Nov 22 05:54:45 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1135923250,client_metadata.root=/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f/05d8faba-ecf7-45b8-97b6-351962e4dbc0],prefix=session evict} (starting...)
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1135923250, format:json, prefix:fs subvolume evict, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:54:45.399+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f' of type subvolume
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f' of type subvolume
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f'' moved to trashcan
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:54:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f, vol_name:cephfs) < ""
Nov 22 05:54:45 compute-0 ceph-mon[75840]: pgmap v1038: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 6 op/s
Nov 22 05:54:45 compute-0 ceph-mon[75840]: mgrmap e13: compute-0.mscchl(active, since 30m)
Nov 22 05:54:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]: dispatch
Nov 22 05:54:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1135923250"}]': finished
Nov 22 05:54:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 7 op/s
Nov 22 05:54:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "auth_id": "tempest-cephx-id-1135923250", "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df2fb1a3-9bc8-42f7-abfc-affd30fb7b3f", "force": true, "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:54:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:54:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:47 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:54:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:54:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: pgmap v1039: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 58 KiB/s wr, 7 op/s
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:54:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2237340005' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta.tmp'
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta.tmp' to config b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/.meta'
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:54:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 116 KiB/s wr, 12 op/s
Nov 22 05:54:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:48 compute-0 ceph-mon[75840]: pgmap v1040: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 116 KiB/s wr, 12 op/s
Nov 22 05:54:49 compute-0 sudo[265226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:49 compute-0 sudo[265226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:49 compute-0 sudo[265226]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:49 compute-0 sudo[265251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:54:49 compute-0 sudo[265251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:49 compute-0 sudo[265251]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:49 compute-0 sudo[265276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:49 compute-0 sudo[265276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:49 compute-0 sudo[265276]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:49 compute-0 sudo[265301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:54:49 compute-0 sudo[265301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:49 compute-0 sudo[265301]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:54:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:54:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:54:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:54:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:54:49 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:49 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev fc62f115-94dc-49d9-a8cc-444a22989858 does not exist
Nov 22 05:54:49 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4ae354cc-d207-4de3-a59b-52e5be30f005 does not exist
Nov 22 05:54:49 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1179f9ca-e9f1-4f9a-a3d9-fcc977ad9fda does not exist
Nov 22 05:54:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:54:50 compute-0 sudo[265358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:50 compute-0 sudo[265358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:50 compute-0 sudo[265358]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:50 compute-0 sudo[265383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:54:50 compute-0 sudo[265383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:50 compute-0 sudo[265383]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:50 compute-0 sudo[265408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:50 compute-0 sudo[265408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:50 compute-0 sudo[265408]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:50 compute-0 sudo[265433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:54:50 compute-0 sudo[265433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:50 compute-0 podman[265498]: 2025-11-22 05:54:50.677071544 +0000 UTC m=+0.072869881 container create deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:54:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:50 compute-0 podman[265498]: 2025-11-22 05:54:50.644174525 +0000 UTC m=+0.039972862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:50 compute-0 systemd[1]: Started libpod-conmon-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope.
Nov 22 05:54:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 80 KiB/s wr, 9 op/s
Nov 22 05:54:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:51 compute-0 podman[265498]: 2025-11-22 05:54:51.015823698 +0000 UTC m=+0.411622065 container init deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:54:51 compute-0 podman[265498]: 2025-11-22 05:54:51.025461452 +0000 UTC m=+0.421259819 container start deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 05:54:51 compute-0 romantic_lamarr[265515]: 167 167
Nov 22 05:54:51 compute-0 systemd[1]: libpod-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope: Deactivated successfully.
Nov 22 05:54:51 compute-0 podman[265498]: 2025-11-22 05:54:51.069120444 +0000 UTC m=+0.464918811 container attach deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:51 compute-0 ceph-mon[75840]: pgmap v1041: 321 pgs: 321 active+clean; 52 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 80 KiB/s wr, 9 op/s
Nov 22 05:54:51 compute-0 podman[265498]: 2025-11-22 05:54:51.069829843 +0000 UTC m=+0.465628210 container died deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:54:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b7890fd80b068d179889d38e6042c2e65d9595059b6d968274941080e2d01d7-merged.mount: Deactivated successfully.
Nov 22 05:54:51 compute-0 podman[265498]: 2025-11-22 05:54:51.214643495 +0000 UTC m=+0.610441862 container remove deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 22 05:54:51 compute-0 systemd[1]: libpod-conmon-deec95abe4cf9e7d3f09a074c3c622a1ffb2f183052696e7b183442d249d2f96.scope: Deactivated successfully.
Nov 22 05:54:51 compute-0 podman[265539]: 2025-11-22 05:54:51.416054483 +0000 UTC m=+0.031681195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:52 compute-0 podman[265539]: 2025-11-22 05:54:52.082288878 +0000 UTC m=+0.697915510 container create e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:54:52 compute-0 systemd[1]: Started libpod-conmon-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope.
Nov 22 05:54:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:52 compute-0 podman[265539]: 2025-11-22 05:54:52.481678898 +0000 UTC m=+1.097305520 container init e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 05:54:52 compute-0 podman[265539]: 2025-11-22 05:54:52.494167779 +0000 UTC m=+1.109794381 container start e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:54:52 compute-0 podman[265539]: 2025-11-22 05:54:52.522348758 +0000 UTC m=+1.137975390 container attach e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta.tmp'
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta.tmp' to config b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/.meta'
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:54:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:54:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00018543050400468843 of space, bias 4.0, pg target 0.2225166048056261 quantized to 16 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:54:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:54:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:53 compute-0 ceph-mon[75840]: pgmap v1042: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 22 05:54:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 05:54:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:54:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:53 compute-0 podman[265560]: 2025-11-22 05:54:53.274351045 +0000 UTC m=+0.122496565 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 22 05:54:53 compute-0 beautiful_noether[265555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:54:53 compute-0 beautiful_noether[265555]: --> relative data size: 1.0
Nov 22 05:54:53 compute-0 beautiful_noether[265555]: --> All data devices are unavailable
Nov 22 05:54:53 compute-0 systemd[1]: libpod-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Deactivated successfully.
Nov 22 05:54:53 compute-0 podman[265539]: 2025-11-22 05:54:53.688647152 +0000 UTC m=+2.304273784 container died e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 05:54:53 compute-0 systemd[1]: libpod-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Consumed 1.134s CPU time.
Nov 22 05:54:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-83fe22f10b5b3a0a31eec48a41bee4aea94d7286ae71b27e5469b9106d64032d-merged.mount: Deactivated successfully.
Nov 22 05:54:53 compute-0 podman[265539]: 2025-11-22 05:54:53.782286818 +0000 UTC m=+2.397913450 container remove e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 05:54:53 compute-0 systemd[1]: libpod-conmon-e60d9244cfd55d2d2413c496b49d4a65b46dec56f4b8f2c2c0804b2c665e3220.scope: Deactivated successfully.
Nov 22 05:54:53 compute-0 sudo[265433]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:53 compute-0 sudo[265622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:53 compute-0 sudo[265622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:53 compute-0 sudo[265622]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:53 compute-0 sudo[265647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:54:53 compute-0 sudo[265647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:53 compute-0 sudo[265647]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:54 compute-0 sudo[265672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:54 compute-0 sudo[265672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:54 compute-0 sudo[265672]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:54:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:54 compute-0 sudo[265697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:54:54 compute-0 sudo[265697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:54:54 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:54:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:54:54 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.535814505 +0000 UTC m=+0.045076321 container create 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:54:54 compute-0 systemd[1]: Started libpod-conmon-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope.
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.517113595 +0000 UTC m=+0.026375451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.638964341 +0000 UTC m=+0.148226237 container init 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.648112651 +0000 UTC m=+0.157374507 container start 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.652009677 +0000 UTC m=+0.161271563 container attach 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:54:54 compute-0 recursing_shockley[265780]: 167 167
Nov 22 05:54:54 compute-0 systemd[1]: libpod-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope: Deactivated successfully.
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.654355541 +0000 UTC m=+0.163617397 container died 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf037877dc01d3c25e94789a9fac8d76f5fe5b4a8a7055cf54c3a1ffcd6d796-merged.mount: Deactivated successfully.
Nov 22 05:54:54 compute-0 podman[265763]: 2025-11-22 05:54:54.703177334 +0000 UTC m=+0.212439200 container remove 680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 05:54:54 compute-0 systemd[1]: libpod-conmon-680349b047d7b09e0b5b9c15a992a2c16f98688c77a2ee0534fed3c775a91298.scope: Deactivated successfully.
Nov 22 05:54:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:54:54 compute-0 podman[265804]: 2025-11-22 05:54:54.916081885 +0000 UTC m=+0.061303024 container create eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:54:54 compute-0 systemd[1]: Started libpod-conmon-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope.
Nov 22 05:54:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:54 compute-0 podman[265804]: 2025-11-22 05:54:54.895858483 +0000 UTC m=+0.041079652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:55 compute-0 podman[265804]: 2025-11-22 05:54:55.037418677 +0000 UTC m=+0.182639826 container init eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 05:54:55 compute-0 podman[265804]: 2025-11-22 05:54:55.050765461 +0000 UTC m=+0.195986620 container start eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 05:54:55 compute-0 podman[265804]: 2025-11-22 05:54:55.057500265 +0000 UTC m=+0.202721404 container attach eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 05:54:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:54:55 compute-0 ceph-mon[75840]: pgmap v1043: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]: {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     "0": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "devices": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "/dev/loop3"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             ],
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_name": "ceph_lv0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_size": "21470642176",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "name": "ceph_lv0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "tags": {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_name": "ceph",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.crush_device_class": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.encrypted": "0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_id": "0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.vdo": "0"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             },
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "vg_name": "ceph_vg0"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         }
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     ],
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     "1": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "devices": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "/dev/loop4"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             ],
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_name": "ceph_lv1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_size": "21470642176",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "name": "ceph_lv1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "tags": {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_name": "ceph",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.crush_device_class": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.encrypted": "0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_id": "1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.vdo": "0"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             },
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "vg_name": "ceph_vg1"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         }
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     ],
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     "2": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "devices": [
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "/dev/loop5"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             ],
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_name": "ceph_lv2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_size": "21470642176",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "name": "ceph_lv2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "tags": {
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.cluster_name": "ceph",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.crush_device_class": "",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.encrypted": "0",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osd_id": "2",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:                 "ceph.vdo": "0"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             },
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "type": "block",
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:             "vg_name": "ceph_vg2"
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:         }
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]:     ]
Nov 22 05:54:55 compute-0 youthful_mcnulty[265821]: }
Nov 22 05:54:55 compute-0 systemd[1]: libpod-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope: Deactivated successfully.
Nov 22 05:54:55 compute-0 podman[265804]: 2025-11-22 05:54:55.894446109 +0000 UTC m=+1.039667248 container died eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 05:54:55 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:55 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:54:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a3ea3fac54bd50438632e59186be2d9e215e2b0487705fba85893123dc2df10-merged.mount: Deactivated successfully.
Nov 22 05:54:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 05:54:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:54:55 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID Joe with tenant 525ba1ccf0d546c7b4118a0855306190
Nov 22 05:54:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:55 compute-0 podman[265804]: 2025-11-22 05:54:55.977527777 +0000 UTC m=+1.122748916 container remove eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:54:55 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:55 compute-0 systemd[1]: libpod-conmon-eea099e22f9d758409cb9ba9b39ddecc4a2b281f1b18ee6950048bf7275d6578.scope: Deactivated successfully.
Nov 22 05:54:56 compute-0 sudo[265697]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:54:56 compute-0 sudo[265842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:56 compute-0 sudo[265842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:56 compute-0 sudo[265842]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:56 compute-0 sudo[265867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:54:56 compute-0 sudo[265867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:56 compute-0 sudo[265867]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:54:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:56 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:56 compute-0 sudo[265892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:56 compute-0 sudo[265892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:56 compute-0 sudo[265892]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:56 compute-0 sudo[265917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:54:56 compute-0 sudo[265917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.739585537 +0000 UTC m=+0.051278020 container create 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:54:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 84 KiB/s wr, 10 op/s
Nov 22 05:54:56 compute-0 systemd[1]: Started libpod-conmon-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope.
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.712820476 +0000 UTC m=+0.024513019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.844958223 +0000 UTC m=+0.156650766 container init 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.858985386 +0000 UTC m=+0.170677849 container start 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.862337197 +0000 UTC m=+0.174029750 container attach 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:54:56 compute-0 friendly_snyder[265998]: 167 167
Nov 22 05:54:56 compute-0 systemd[1]: libpod-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope: Deactivated successfully.
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.868726772 +0000 UTC m=+0.180419295 container died 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3da5f466677eaf521a76d3525617bcb7e593135bf9f6e1646ee20ed707b9ddef-merged.mount: Deactivated successfully.
Nov 22 05:54:56 compute-0 podman[265982]: 2025-11-22 05:54:56.923509667 +0000 UTC m=+0.235202130 container remove 3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 05:54:56 compute-0 systemd[1]: libpod-conmon-3a6414b689644499e35d7a133bc508e62baebf4e069ba64b2cd9bd5cc7becfb8.scope: Deactivated successfully.
Nov 22 05:54:57 compute-0 podman[266022]: 2025-11-22 05:54:57.180433859 +0000 UTC m=+0.075739308 container create 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:54:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:54:57 compute-0 ceph-mon[75840]: pgmap v1044: 321 pgs: 321 active+clean; 53 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 84 KiB/s wr, 10 op/s
Nov 22 05:54:57 compute-0 systemd[1]: Started libpod-conmon-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope.
Nov 22 05:54:57 compute-0 podman[266022]: 2025-11-22 05:54:57.150278617 +0000 UTC m=+0.045584136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:54:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:54:57 compute-0 podman[266022]: 2025-11-22 05:54:57.291786449 +0000 UTC m=+0.187091938 container init 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:54:57 compute-0 podman[266022]: 2025-11-22 05:54:57.308104054 +0000 UTC m=+0.203409463 container start 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:54:57 compute-0 podman[266022]: 2025-11-22 05:54:57.311531008 +0000 UTC m=+0.206836417 container attach 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:54:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:54:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:57 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:54:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:54:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:54:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:54:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:54:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:54:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:54:58 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:54:58 compute-0 interesting_poitras[266039]: {
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_id": 1,
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "type": "bluestore"
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     },
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_id": 2,
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "type": "bluestore"
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     },
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_id": 0,
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:         "type": "bluestore"
Nov 22 05:54:58 compute-0 interesting_poitras[266039]:     }
Nov 22 05:54:58 compute-0 interesting_poitras[266039]: }
Nov 22 05:54:58 compute-0 systemd[1]: libpod-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Deactivated successfully.
Nov 22 05:54:58 compute-0 podman[266022]: 2025-11-22 05:54:58.345198111 +0000 UTC m=+1.240503570 container died 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:54:58 compute-0 systemd[1]: libpod-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Consumed 1.041s CPU time.
Nov 22 05:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-3759a8721b98e95a3e613bea3d3f7857f382eb1b25572ff445a571005c57f5be-merged.mount: Deactivated successfully.
Nov 22 05:54:58 compute-0 podman[266022]: 2025-11-22 05:54:58.422335397 +0000 UTC m=+1.317640826 container remove 542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:54:58 compute-0 systemd[1]: libpod-conmon-542e2ebcc9e4990d085b929a8845d0fc8e020de0dbf204c8a9ecec8ee10e73d0.scope: Deactivated successfully.
Nov 22 05:54:58 compute-0 sudo[265917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:54:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:54:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1a9be263-dd44-4ca5-b82b-540e2d3a15ee does not exist
Nov 22 05:54:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 94c3aba4-a2f5-4d6c-b1cd-7c0b0ca75f30 does not exist
Nov 22 05:54:58 compute-0 sudo[266085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:54:58 compute-0 sudo[266085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:58 compute-0 sudo[266085]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:58 compute-0 sudo[266110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:54:58 compute-0 sudo[266110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:54:58 compute-0 sudo[266110]: pam_unix(sudo:session): session closed for user root
Nov 22 05:54:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 118 KiB/s wr, 12 op/s
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:54:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:54:59 compute-0 ceph-mon[75840]: pgmap v1045: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 118 KiB/s wr, 12 op/s
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta.tmp'
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta.tmp' to config b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/.meta'
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:54:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:54:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:54:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 05:55:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 05:55:01 compute-0 ceph-mon[75840]: pgmap v1046: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:55:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:55:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:01 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 05:55:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 05:55:03 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:55:03 compute-0 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Nov 22 05:55:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:03 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:03.030+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 22 05:55:03 compute-0 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 22 05:55:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:03 compute-0 ceph-mon[75840]: pgmap v1047: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 05:55:03 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 podman[266136]: 2025-11-22 05:55:04.236672037 +0000 UTC m=+0.086895803 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 05:55:04 compute-0 podman[266137]: 2025-11-22 05:55:04.236839681 +0000 UTC m=+0.078790501 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta.tmp'
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta.tmp' to config b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946/.meta'
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:55:04 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:55:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:55:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mon[75840]: pgmap v1048: 321 pgs: 321 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:55:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:05 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"} v 0) v1
Nov 22 05:55:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:06 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-758311238 with tenant 7c0b4b3107784ce6890ddd12d362ec8e
Nov 22 05:55:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 05:55:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume authorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-758311238", "caps": ["mds", "allow rw path=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:07 compute-0 ceph-mon[75840]: pgmap v1049: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 05:55:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:55:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:55:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:55:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f776c050-1471-4343-b299-6c3d96952946, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f776c050-1471-4343-b299-6c3d96952946, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:09 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:09.231+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f776c050-1471-4343-b299-6c3d96952946' of type subvolume
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f776c050-1471-4343-b299-6c3d96952946' of type subvolume
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f776c050-1471-4343-b299-6c3d96952946'' moved to trashcan
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:55:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f776c050-1471-4343-b299-6c3d96952946, vol_name:cephfs) < ""
Nov 22 05:55:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:09 compute-0 ceph-mon[75840]: pgmap v1050: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 94 KiB/s wr, 9 op/s
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7'
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07
Nov 22 05:55:10 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07],prefix=session evict} (starting...)
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f776c050-1471-4343-b299-6c3d96952946", "format": "json"}]: dispatch
Nov 22 05:55:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f776c050-1471-4343-b299-6c3d96952946", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 05:55:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:11 compute-0 ceph-mon[75840]: pgmap v1051: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 7 op/s
Nov 22 05:55:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:55:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:12 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:12 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:12 compute-0 nova_compute[255660]: 2025-11-22 05:55:12.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:12 compute-0 nova_compute[255660]: 2025-11-22 05:55:12.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 05:55:12 compute-0 nova_compute[255660]: 2025-11-22 05:55:12.147 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 05:55:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 10 op/s
Nov 22 05:55:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"} v 0) v1
Nov 22 05:55:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"} v 0) v1
Nov 22 05:55:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]': finished
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume deauthorize, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-758311238, client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07
Nov 22 05:55:13 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-758311238,client_metadata.root=/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7/d58a3c87-177b-42b3-a6c7-d38a95691a07],prefix=session evict} (starting...)
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-758311238, format:json, prefix:fs subvolume evict, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mon[75840]: pgmap v1052: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 10 op/s
Nov 22 05:55:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]: dispatch
Nov 22 05:55:13 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-758311238"}]': finished
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 05:55:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "auth_id": "tempest-cephx-id-758311238", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.174 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.174 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.175 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.175 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.176 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:55:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986122239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.709 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:55:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 22 05:55:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 22 05:55:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:15 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:15 compute-0 ceph-mon[75840]: pgmap v1053: 321 pgs: 321 active+clean; 54 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 63 KiB/s wr, 7 op/s
Nov 22 05:55:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/986122239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 22 05:55:15 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.886 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.887 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.887 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.942 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.943 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:55:15 compute-0 nova_compute[255660]: 2025-11-22 05:55:15.958 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:55:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:55:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702995535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.405 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.413 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.434 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.436 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.437 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.438 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:16 compute-0 nova_compute[255660]: 2025-11-22 05:55:16.438 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 05:55:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:55:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice", "format": "json"}]: dispatch
Nov 22 05:55:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/702995535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 22 05:55:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) v1
Nov 22 05:55:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5
Nov 22 05:55:17 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2/c9d51eaa-c51e-44ea-97b2-112d07c2dff5],prefix=session evict} (starting...)
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:17 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:17 compute-0 ceph-mon[75840]: pgmap v1054: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 107 KiB/s wr, 11 op/s
Nov 22 05:55:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 22 05:55:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 22 05:55:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 05:55:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:55:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.427092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919427297, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2361, "num_deletes": 253, "total_data_size": 2906270, "memory_usage": 2964824, "flush_reason": "Manual Compaction"}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 22 05:55:19 compute-0 nova_compute[255660]: 2025-11-22 05:55:19.439 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919450682, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2857694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21186, "largest_seqno": 23546, "table_properties": {"data_size": 2847377, "index_size": 6235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25247, "raw_average_key_size": 21, "raw_value_size": 2825082, "raw_average_value_size": 2382, "num_data_blocks": 276, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790776, "oldest_key_time": 1763790776, "file_creation_time": 1763790919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23542 microseconds, and 11249 cpu microseconds.
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.450754) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2857694 bytes OK
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.450792) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454151) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454190) EVENT_LOG_v1 {"time_micros": 1763790919454180, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.454219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2895562, prev total WAL file size 2895562, number of live WAL files 2.
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.455248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2790KB)], [50(7258KB)]
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919455288, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10290391, "oldest_snapshot_seqno": -1}
Nov 22 05:55:19 compute-0 nova_compute[255660]: 2025-11-22 05:55:19.465 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:19 compute-0 nova_compute[255660]: 2025-11-22 05:55:19.466 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:19 compute-0 nova_compute[255660]: 2025-11-22 05:55:19.466 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:55:19 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5150 keys, 8541238 bytes, temperature: kUnknown
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919523246, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8541238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8504889, "index_size": 22351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 126893, "raw_average_key_size": 24, "raw_value_size": 8410345, "raw_average_value_size": 1633, "num_data_blocks": 932, "num_entries": 5150, "num_filter_entries": 5150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.523613) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8541238 bytes
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.525054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.2 rd, 125.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 5678, records dropped: 528 output_compression: NoCompression
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.525073) EVENT_LOG_v1 {"time_micros": 1763790919525063, "job": 26, "event": "compaction_finished", "compaction_time_micros": 68066, "compaction_time_cpu_micros": 28732, "output_level": 6, "num_output_files": 1, "total_output_size": 8541238, "num_input_records": 5678, "num_output_records": 5150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919525672, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790919527071, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.455191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:55:19.527198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:55:19 compute-0 ceph-mon[75840]: pgmap v1055: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 77 KiB/s wr, 9 op/s
Nov 22 05:55:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:19 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:20 compute-0 nova_compute[255660]: 2025-11-22 05:55:20.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:55:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) v1
Nov 22 05:55:20 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:55:20 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:20.824+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 05:55:20 compute-0 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 22 05:55:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:20 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 22 05:55:21 compute-0 nova_compute[255660]: 2025-11-22 05:55:21.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:21 compute-0 nova_compute[255660]: 2025-11-22 05:55:21.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:21 compute-0 sshd-session[266224]: Invalid user jito from 80.94.92.166 port 46510
Nov 22 05:55:21 compute-0 sshd-session[266224]: Connection closed by invalid user jito 80.94.92.166 port 46510 [preauth]
Nov 22 05:55:21 compute-0 ceph-mon[75840]: pgmap v1056: 321 pgs: 321 active+clean; 55 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 8 op/s
Nov 22 05:55:21 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:22 compute-0 nova_compute[255660]: 2025-11-22 05:55:22.142 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 113 KiB/s wr, 12 op/s
Nov 22 05:55:23 compute-0 nova_compute[255660]: 2025-11-22 05:55:23.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:55:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:55:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:23 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:23 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:23 compute-0 ceph-mon[75840]: pgmap v1057: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 113 KiB/s wr, 12 op/s
Nov 22 05:55:23 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:55:23 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:55:24 compute-0 nova_compute[255660]: 2025-11-22 05:55:24.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:24 compute-0 nova_compute[255660]: 2025-11-22 05:55:24.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:55:24 compute-0 nova_compute[255660]: 2025-11-22 05:55:24.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:55:24 compute-0 nova_compute[255660]: 2025-11-22 05:55:24.160 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:55:24 compute-0 nova_compute[255660]: 2025-11-22 05:55:24.160 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:24 compute-0 podman[266227]: 2025-11-22 05:55:24.309922647 +0000 UTC m=+0.166361861 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:55:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:55:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 05:55:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID david with tenant 525ba1ccf0d546c7b4118a0855306190
Nov 22 05:55:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, tenant_id:525ba1ccf0d546c7b4118a0855306190, vol_name:cephfs) < ""
Nov 22 05:55:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 81 KiB/s wr, 9 op/s
Nov 22 05:55:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_8d03444f-9989-4f30-9672-a2032459f666", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "tenant_id": "525ba1ccf0d546c7b4118a0855306190", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:25 compute-0 ceph-mon[75840]: pgmap v1058: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 81 KiB/s wr, 9 op/s
Nov 22 05:55:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:55:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:26 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice_bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:26 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 113 KiB/s wr, 11 op/s
Nov 22 05:55:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:27 compute-0 ceph-mon[75840]: pgmap v1059: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 113 KiB/s wr, 11 op/s
Nov 22 05:55:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta.tmp'
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta.tmp' to config b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/.meta'
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:55:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 9 op/s
Nov 22 05:55:28 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:28 compute-0 ceph-mon[75840]: pgmap v1060: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 70 KiB/s wr, 9 op/s
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 22 05:55:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 22 05:55:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:30 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 22 05:55:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 22 05:55:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 05:55:31 compute-0 nova_compute[255660]: 2025-11-22 05:55:31.417 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:55:31 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:31.441 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:55:31 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:31.442 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:55:31 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:31 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 22 05:55:31 compute-0 ceph-mon[75840]: pgmap v1061: 321 pgs: 321 active+clean; 55 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 05:55:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 05:55:31 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:31 compute-0 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Nov 22 05:55:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, tenant_id:7c0b4b3107784ce6890ddd12d362ec8e, vol_name:cephfs) < ""
Nov 22 05:55:31 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:31.867+0000 7f5339360640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 22 05:55:31 compute-0 ceph-mgr[76134]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 22 05:55:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "tenant_id": "7c0b4b3107784ce6890ddd12d362ec8e", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:32 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 101 KiB/s wr, 11 op/s
Nov 22 05:55:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:33 compute-0 ceph-mon[75840]: pgmap v1062: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 101 KiB/s wr, 11 op/s
Nov 22 05:55:33 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:55:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:33 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:34 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:55:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:35 compute-0 podman[266254]: 2025-11-22 05:55:35.238744606 +0000 UTC m=+0.089023671 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 05:55:35 compute-0 podman[266255]: 2025-11-22 05:55:35.245659175 +0000 UTC m=+0.087734276 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 05:55:35 compute-0 ceph-mon[75840]: pgmap v1063: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:55:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3'
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/ee7c5dcf-f783-40a3-8f91-0b5e06a2a160
Nov 22 05:55:36 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3/ee7c5dcf-f783-40a3-8f91-0b5e06a2a160],prefix=session evict} (starting...)
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.445 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:55:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:55:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.935 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:55:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:55:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:55:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:55:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:37 compute-0 ceph-mon[75840]: pgmap v1064: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 86 KiB/s wr, 8 op/s
Nov 22 05:55:37 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:38 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 05:55:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:55:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:55:38 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:39 compute-0 ceph-mon[75840]: pgmap v1065: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 05:55:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:39 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 22 05:55:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) v1
Nov 22 05:55:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6
Nov 22 05:55:40 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666/37e2a624-9908-48cd-a1ec-1b287d7f34c6],prefix=session evict} (starting...)
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:40 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 22 05:55:40 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 22 05:55:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 5 op/s
Nov 22 05:55:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:55:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID alice bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:41 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "david", "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mon[75840]: pgmap v1066: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 54 KiB/s wr, 5 op/s
Nov 22 05:55:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:41 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:55:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7325 writes, 29K keys, 7325 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7325 writes, 1543 syncs, 4.75 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1699 writes, 5316 keys, 1699 commit groups, 1.0 writes per commit group, ingest: 7.07 MB, 0.01 MB/s
                                           Interval WAL: 1699 writes, 663 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:55:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "r", "format": "json"}]: dispatch
Nov 22 05:55:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 101 KiB/s wr, 10 op/s
Nov 22 05:55:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:43 compute-0 ceph-mon[75840]: pgmap v1067: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 101 KiB/s wr, 10 op/s
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:43 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:43.696+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3' of type subvolume
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7bcf26e0-68e7-4e86-801e-5338f311cec3' of type subvolume
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7bcf26e0-68e7-4e86-801e-5338f311cec3'' moved to trashcan
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7bcf26e0-68e7-4e86-801e-5338f311cec3, vol_name:cephfs) < ""
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:55:43
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:55:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:55:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "format": "json"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7bcf26e0-68e7-4e86-801e-5338f311cec3", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 22 05:55:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 22 05:55:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:55:44 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:55:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:55:45 compute-0 ceph-mon[75840]: pgmap v1068: 321 pgs: 321 active+clean; 56 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 69 KiB/s wr, 7 op/s
Nov 22 05:55:45 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 22 05:55:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 22 05:55:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 22 05:55:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 22 05:55:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 05:55:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:55:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:55:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:55:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:47 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:47.358+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7' of type subvolume
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7' of type subvolume
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7'' moved to trashcan
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:55:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7, vol_name:cephfs) < ""
Nov 22 05:55:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:55:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9133 writes, 36K keys, 9133 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 9133 writes, 2084 syncs, 4.38 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2182 writes, 7209 keys, 2182 commit groups, 1.0 writes per commit group, ingest: 7.59 MB, 0.01 MB/s
                                           Interval WAL: 2182 writes, 839 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:55:47 compute-0 ceph-mon[75840]: pgmap v1069: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 89 KiB/s wr, 9 op/s
Nov 22 05:55:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:55:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1499730367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 05:55:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID bob with tenant 94bcd246264e4a03b75056b04f28dee8
Nov 22 05:55:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:55:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fdc0a474-b1f3-43ab-b94f-a0eefb55e7b7", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:55:48 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:55:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 05:55:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:49 compute-0 ceph-mon[75840]: pgmap v1070: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2' of type subvolume
Nov 22 05:55:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:50.891+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2' of type subvolume
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2'' moved to trashcan
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:55:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2, vol_name:cephfs) < ""
Nov 22 05:55:51 compute-0 ceph-mon[75840]: pgmap v1071: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 67 KiB/s wr, 8 op/s
Nov 22 05:55:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "format": "json"}]: dispatch
Nov 22 05:55:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f96efe4e-dc4d-4c2a-8b79-a56ce8d9f5a2", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 KiB/s wr, 11 op/s
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta.tmp'
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta.tmp' to config b'/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/.meta'
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "format": "json"}]: dispatch
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:55:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:55:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:55:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00025697005030279353 of space, bias 4.0, pg target 0.30836406036335223 quantized to 16 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:55:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:55:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 05:55:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s
                                           Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 05:55:53 compute-0 ceph-mon[75840]: pgmap v1072: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 102 KiB/s wr, 11 op/s
Nov 22 05:55:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:55:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "format": "json"}]: dispatch
Nov 22 05:55:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "format": "json"}]: dispatch
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:54.650+0000 7f5339360640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d03444f-9989-4f30-9672-a2032459f666, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d03444f-9989-4f30-9672-a2032459f666, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:55:54.737+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d03444f-9989-4f30-9672-a2032459f666' of type subvolume
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d03444f-9989-4f30-9672-a2032459f666' of type subvolume
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d03444f-9989-4f30-9672-a2032459f666'' moved to trashcan
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d03444f-9989-4f30-9672-a2032459f666, vol_name:cephfs) < ""
Nov 22 05:55:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 55 KiB/s wr, 6 op/s
Nov 22 05:55:55 compute-0 podman[266298]: 2025-11-22 05:55:55.28131618 +0000 UTC m=+0.132049616 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:55:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "auth_id": "admin", "format": "json"}]: dispatch
Nov 22 05:55:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d03444f-9989-4f30-9672-a2032459f666", "format": "json"}]: dispatch
Nov 22 05:55:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d03444f-9989-4f30-9672-a2032459f666", "force": true, "format": "json"}]: dispatch
Nov 22 05:55:55 compute-0 ceph-mon[75840]: pgmap v1073: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 55 KiB/s wr, 6 op/s
Nov 22 05:55:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 7 op/s
Nov 22 05:55:56 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 05:55:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 05:55:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]} v 0) v1
Nov 22 05:55:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]': finished
Nov 22 05:55:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 05:55:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, tenant_id:94bcd246264e4a03b75056b04f28dee8, vol_name:cephfs) < ""
Nov 22 05:55:57 compute-0 ceph-mon[75840]: pgmap v1074: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 7 op/s
Nov 22 05:55:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]: dispatch
Nov 22 05:55:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6,allow rw path=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_298b8575-0ab5-4c93-992c-f312a6379d92"]}]': finished
Nov 22 05:55:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:55:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:55:58 compute-0 sudo[266325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:55:58 compute-0 sudo[266325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:58 compute-0 sudo[266325]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:58 compute-0 sudo[266350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:55:58 compute-0 sudo[266350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:58 compute-0 sudo[266350]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 05:55:58 compute-0 sudo[266375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:55:58 compute-0 sudo[266375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:58 compute-0 sudo[266375]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "tenant_id": "94bcd246264e4a03b75056b04f28dee8", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:55:58 compute-0 sudo[266400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 05:55:58 compute-0 sudo[266400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:59 compute-0 sudo[266400]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:55:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:55:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:55:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:55:59 compute-0 sudo[266447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:55:59 compute-0 sudo[266447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:59 compute-0 sudo[266447]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:59 compute-0 sudo[266472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:55:59 compute-0 sudo[266472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:59 compute-0 sudo[266472]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:59 compute-0 sudo[266497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:55:59 compute-0 sudo[266497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:59 compute-0 sudo[266497]: pam_unix(sudo:session): session closed for user root
Nov 22 05:55:59 compute-0 sudo[266522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:55:59 compute-0 sudo[266522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:55:59 compute-0 ceph-mon[75840]: pgmap v1075: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 54 KiB/s wr, 6 op/s
Nov 22 05:55:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:55:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:00 compute-0 sudo[266522]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:00 compute-0 sudo[266578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:00 compute-0 sudo[266578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:00 compute-0 sudo[266578]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:00 compute-0 sudo[266603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:56:00 compute-0 sudo[266603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:00 compute-0 sudo[266603]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:00 compute-0 sudo[266628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:00 compute-0 sudo[266628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:00 compute-0 sudo[266628]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:56:00 compute-0 sudo[266653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 05:56:00 compute-0 sudo[266653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:56:00 compute-0 sudo[266653]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:56:00 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:56:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 05:56:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:56:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]} v 0) v1
Nov 22 05:56:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]: dispatch
Nov 22 05:56:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:01 compute-0 sudo[266696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:01 compute-0 sudo[266696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:01 compute-0 sudo[266696]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:01 compute-0 sudo[266721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:56:01 compute-0 sudo[266721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:01 compute-0 sudo[266721]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]': finished
Nov 22 05:56:01 compute-0 sudo[266746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:01 compute-0 sudo[266746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:01 compute-0 sudo[266746]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:01 compute-0 sudo[266771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- inventory --format=json-pretty --filter-for-batch
Nov 22 05:56:01 compute-0 sudo[266771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2
Nov 22 05:56:01 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/298b8575-0ab5-4c93-992c-f312a6379d92/38fc03b7-57f2-4da0-9099-80a0070afed2],prefix=session evict} (starting...)
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:56:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:298b8575-0ab5-4c93-992c-f312a6379d92, vol_name:cephfs) < ""
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.69245882 +0000 UTC m=+0.082895714 container create 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.633444518 +0000 UTC m=+0.023881442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:01 compute-0 systemd[1]: Started libpod-conmon-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope.
Nov 22 05:56:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.913002269 +0000 UTC m=+0.303439193 container init 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.922452687 +0000 UTC m=+0.312889581 container start 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 05:56:01 compute-0 flamboyant_knuth[266853]: 167 167
Nov 22 05:56:01 compute-0 systemd[1]: libpod-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope: Deactivated successfully.
Nov 22 05:56:01 compute-0 conmon[266853]: conmon 036599f02dd36edab434 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope/container/memory.events
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.940284104 +0000 UTC m=+0.330721038 container attach 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:56:01 compute-0 podman[266837]: 2025-11-22 05:56:01.942210316 +0000 UTC m=+0.332647210 container died 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:02 compute-0 ceph-mon[75840]: pgmap v1076: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]: dispatch
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:02 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_5fe2732a-575f-4985-a0be-d017e158a52a"]}]': finished
Nov 22 05:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bc461d74da757971882d6a4baadf1673e402be24a34fe90448209a187658775-merged.mount: Deactivated successfully.
Nov 22 05:56:02 compute-0 podman[266837]: 2025-11-22 05:56:02.244746884 +0000 UTC m=+0.635183788 container remove 036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:56:02 compute-0 systemd[1]: libpod-conmon-036599f02dd36edab4344bcf4c91353ba538f508a7ce62aa78bb58269dc6d341.scope: Deactivated successfully.
Nov 22 05:56:02 compute-0 podman[266878]: 2025-11-22 05:56:02.478280648 +0000 UTC m=+0.068474180 container create 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:56:02 compute-0 systemd[1]: Started libpod-conmon-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope.
Nov 22 05:56:02 compute-0 podman[266878]: 2025-11-22 05:56:02.438222275 +0000 UTC m=+0.028415767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:02 compute-0 podman[266878]: 2025-11-22 05:56:02.598910541 +0000 UTC m=+0.189104033 container init 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:56:02 compute-0 podman[266878]: 2025-11-22 05:56:02.610136497 +0000 UTC m=+0.200329999 container start 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:02 compute-0 podman[266878]: 2025-11-22 05:56:02.616074719 +0000 UTC m=+0.206268231 container attach 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 05:56:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 7 op/s
Nov 22 05:56:03 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "298b8575-0ab5-4c93-992c-f312a6379d92", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:03 compute-0 ceph-mon[75840]: pgmap v1077: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 70 KiB/s wr, 7 op/s
Nov 22 05:56:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:04 compute-0 frosty_austin[266896]: [
Nov 22 05:56:04 compute-0 frosty_austin[266896]:     {
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "available": false,
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "ceph_device": false,
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "lsm_data": {},
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "lvs": [],
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "path": "/dev/sr0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "rejected_reasons": [
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "Has a FileSystem",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "Insufficient space (<5GB)"
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         ],
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         "sys_api": {
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "actuators": null,
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "device_nodes": "sr0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "devname": "sr0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "human_readable_size": "482.00 KB",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "id_bus": "ata",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "model": "QEMU DVD-ROM",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "nr_requests": "2",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "parent": "/dev/sr0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "partitions": {},
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "path": "/dev/sr0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "removable": "1",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "rev": "2.5+",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "ro": "0",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "rotational": "1",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "sas_address": "",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "sas_device_handle": "",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "scheduler_mode": "mq-deadline",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "sectors": 0,
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "sectorsize": "2048",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "size": 493568.0,
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "support_discard": "2048",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "type": "disk",
Nov 22 05:56:04 compute-0 frosty_austin[266896]:             "vendor": "QEMU"
Nov 22 05:56:04 compute-0 frosty_austin[266896]:         }
Nov 22 05:56:04 compute-0 frosty_austin[266896]:     }
Nov 22 05:56:04 compute-0 frosty_austin[266896]: ]
Nov 22 05:56:04 compute-0 systemd[1]: libpod-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Deactivated successfully.
Nov 22 05:56:04 compute-0 systemd[1]: libpod-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Consumed 1.577s CPU time.
Nov 22 05:56:04 compute-0 podman[268726]: 2025-11-22 05:56:04.207392674 +0000 UTC m=+0.042249664 container died 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a14b1a9bf1739a21e10b68d44aa1d9b530511f4bbe3fd3d687c566dceeb088b-merged.mount: Deactivated successfully.
Nov 22 05:56:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 4 op/s
Nov 22 05:56:05 compute-0 podman[268726]: 2025-11-22 05:56:05.029871943 +0000 UTC m=+0.864728983 container remove 8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:56:05 compute-0 systemd[1]: libpod-conmon-8788b2ea2a8ffb7e2f0abae500127fb5405f8140b4ffa57d4653e98a00a69c8e.scope: Deactivated successfully.
Nov 22 05:56:05 compute-0 sudo[266771]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: pgmap v1078: 321 pgs: 321 active+clean; 57 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 34 KiB/s wr, 4 op/s
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 67dee7db-6e16-4aa2-a8a2-b49037d7057f does not exist
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3408f30e-63c9-450e-9d1a-ef88768e8522 does not exist
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c44dff1a-dff0-4db4-92f9-f1c318c4c189 does not exist
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:56:05 compute-0 sudo[268741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:05 compute-0 sudo[268741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:05 compute-0 sudo[268741]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:05 compute-0 podman[268766]: 2025-11-22 05:56:05.447727298 +0000 UTC m=+0.078063682 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:56:05 compute-0 podman[268765]: 2025-11-22 05:56:05.463503489 +0000 UTC m=+0.095061146 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 05:56:05 compute-0 sudo[268787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:56:05 compute-0 sudo[268787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:05 compute-0 sudo[268787]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:05 compute-0 sudo[268832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:05 compute-0 sudo[268832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:05 compute-0 sudo[268832]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:05 compute-0 sudo[268857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:56:05 compute-0 sudo[268857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) v1
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6
Nov 22 05:56:05 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a/1969f43c-19e4-483b-9ce9-418a6248dbb6],prefix=session evict} (starting...)
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:56:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:05 compute-0 podman[268923]: 2025-11-22 05:56:05.983218967 +0000 UTC m=+0.044449468 container create 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:56:06 compute-0 systemd[1]: Started libpod-conmon-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope.
Nov 22 05:56:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:05.964459127 +0000 UTC m=+0.025689628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:06.075725174 +0000 UTC m=+0.136955735 container init 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:06.084373535 +0000 UTC m=+0.145604026 container start 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:06.088456444 +0000 UTC m=+0.149687015 container attach 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:56:06 compute-0 laughing_leakey[268939]: 167 167
Nov 22 05:56:06 compute-0 systemd[1]: libpod-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope: Deactivated successfully.
Nov 22 05:56:06 compute-0 conmon[268939]: conmon 7783a27151ee2a86a626 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope/container/memory.events
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:06.09280864 +0000 UTC m=+0.154039141 container died 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 05:56:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0df53d637e25280966a1034e26b40383181424938d6f228a1747260f7cfd27-merged.mount: Deactivated successfully.
Nov 22 05:56:06 compute-0 podman[268923]: 2025-11-22 05:56:06.198627083 +0000 UTC m=+0.259857584 container remove 7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 22 05:56:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "auth_id": "bob", "format": "json"}]: dispatch
Nov 22 05:56:06 compute-0 systemd[1]: libpod-conmon-7783a27151ee2a86a626d0329b49cb8a832f5dc6a8e6d94d4949f8da374735db.scope: Deactivated successfully.
Nov 22 05:56:06 compute-0 podman[268963]: 2025-11-22 05:56:06.437633921 +0000 UTC m=+0.059781757 container create 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 05:56:06 compute-0 systemd[1]: Started libpod-conmon-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope.
Nov 22 05:56:06 compute-0 podman[268963]: 2025-11-22 05:56:06.408550074 +0000 UTC m=+0.030697990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:06 compute-0 podman[268963]: 2025-11-22 05:56:06.548133239 +0000 UTC m=+0.170281115 container init 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:06 compute-0 podman[268963]: 2025-11-22 05:56:06.56506702 +0000 UTC m=+0.187214866 container start 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:06 compute-0 podman[268963]: 2025-11-22 05:56:06.569645462 +0000 UTC m=+0.191793318 container attach 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Nov 22 05:56:07 compute-0 ceph-mon[75840]: pgmap v1079: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 53 KiB/s wr, 6 op/s
Nov 22 05:56:07 compute-0 objective_brahmagupta[268979]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:56:07 compute-0 objective_brahmagupta[268979]: --> relative data size: 1.0
Nov 22 05:56:07 compute-0 objective_brahmagupta[268979]: --> All data devices are unavailable
Nov 22 05:56:07 compute-0 systemd[1]: libpod-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Deactivated successfully.
Nov 22 05:56:07 compute-0 podman[268963]: 2025-11-22 05:56:07.787757661 +0000 UTC m=+1.409905507 container died 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:56:07 compute-0 systemd[1]: libpod-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Consumed 1.167s CPU time.
Nov 22 05:56:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c362ee17727ff046887916fe1f05be3047c2243d78275095763d99ee54b652b7-merged.mount: Deactivated successfully.
Nov 22 05:56:07 compute-0 podman[268963]: 2025-11-22 05:56:07.859980027 +0000 UTC m=+1.482127843 container remove 6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:56:07 compute-0 systemd[1]: libpod-conmon-6276db5a2ee0f19c2792a02fcd9b864ad1ae6fda0b525a3694f626b596f87393.scope: Deactivated successfully.
Nov 22 05:56:07 compute-0 sudo[268857]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:07 compute-0 sudo[269019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:07 compute-0 sudo[269019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:07 compute-0 sudo[269019]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:08 compute-0 sudo[269044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:56:08 compute-0 sudo[269044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:08 compute-0 sudo[269044]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:08 compute-0 sudo[269069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:08 compute-0 sudo[269069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:08 compute-0 sudo[269069]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:08 compute-0 sudo[269094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:56:08 compute-0 sudo[269094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.518704662 +0000 UTC m=+0.047268292 container create 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:56:08 compute-0 systemd[1]: Started libpod-conmon-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope.
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.493828618 +0000 UTC m=+0.022392278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.649262185 +0000 UTC m=+0.177825895 container init 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.658300927 +0000 UTC m=+0.186864557 container start 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:56:08 compute-0 zealous_mclean[269176]: 167 167
Nov 22 05:56:08 compute-0 systemd[1]: libpod-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope: Deactivated successfully.
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.679976025 +0000 UTC m=+0.208539745 container attach 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.680815638 +0000 UTC m=+0.209379328 container died 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f3ee24f5645f64960b75b56e229300e3a62361c4437be8b1fa03ce0db11f7a-merged.mount: Deactivated successfully.
Nov 22 05:56:08 compute-0 podman[269160]: 2025-11-22 05:56:08.772535184 +0000 UTC m=+0.301098844 container remove 21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:56:08 compute-0 systemd[1]: libpod-conmon-21c755e9f10410c21091e204cd15a829484558db42a35751d02b2145ae6492f4.scope: Deactivated successfully.
Nov 22 05:56:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 58 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 5 op/s
Nov 22 05:56:08 compute-0 podman[269202]: 2025-11-22 05:56:08.990366536 +0000 UTC m=+0.055129272 container create c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:56:09 compute-0 systemd[1]: Started libpod-conmon-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope.
Nov 22 05:56:09 compute-0 podman[269202]: 2025-11-22 05:56:08.963852049 +0000 UTC m=+0.028614835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:09 compute-0 podman[269202]: 2025-11-22 05:56:09.113973024 +0000 UTC m=+0.178735780 container init c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:56:09 compute-0 podman[269202]: 2025-11-22 05:56:09.122234634 +0000 UTC m=+0.186997400 container start c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 05:56:09 compute-0 podman[269202]: 2025-11-22 05:56:09.132288782 +0000 UTC m=+0.197051538 container attach c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:56:09 compute-0 trusting_carver[269219]: {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     "0": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "devices": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "/dev/loop3"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             ],
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_name": "ceph_lv0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_size": "21470642176",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "name": "ceph_lv0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "tags": {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_name": "ceph",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.crush_device_class": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.encrypted": "0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_id": "0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.vdo": "0"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             },
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "vg_name": "ceph_vg0"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         }
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     ],
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     "1": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "devices": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "/dev/loop4"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             ],
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_name": "ceph_lv1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_size": "21470642176",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "name": "ceph_lv1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "tags": {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_name": "ceph",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.crush_device_class": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.encrypted": "0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_id": "1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.vdo": "0"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             },
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "vg_name": "ceph_vg1"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         }
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     ],
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     "2": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "devices": [
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "/dev/loop5"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             ],
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_name": "ceph_lv2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_size": "21470642176",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "name": "ceph_lv2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "tags": {
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.cluster_name": "ceph",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.crush_device_class": "",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.encrypted": "0",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osd_id": "2",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:                 "ceph.vdo": "0"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             },
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "type": "block",
Nov 22 05:56:09 compute-0 trusting_carver[269219]:             "vg_name": "ceph_vg2"
Nov 22 05:56:09 compute-0 trusting_carver[269219]:         }
Nov 22 05:56:09 compute-0 trusting_carver[269219]:     ]
Nov 22 05:56:09 compute-0 trusting_carver[269219]: }
Nov 22 05:56:09 compute-0 systemd[1]: libpod-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope: Deactivated successfully.
Nov 22 05:56:09 compute-0 podman[269202]: 2025-11-22 05:56:09.879098947 +0000 UTC m=+0.943861743 container died c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:10 compute-0 ceph-mon[75840]: pgmap v1080: 321 pgs: 321 active+clean; 58 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 5 op/s
Nov 22 05:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-abdcd7c7e7d2e4ff20f0d9baf2e9d98517ae9c0ed0a6aad44d165c9aeab46d76-merged.mount: Deactivated successfully.
Nov 22 05:56:10 compute-0 podman[269202]: 2025-11-22 05:56:10.679548822 +0000 UTC m=+1.744311588 container remove c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 05:56:10 compute-0 systemd[1]: libpod-conmon-c7590890a7c869fae2e286a14d47258da56a8d1852e5a196fef00ea5cf0c147a.scope: Deactivated successfully.
Nov 22 05:56:10 compute-0 sudo[269094]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Nov 22 05:56:10 compute-0 sudo[269242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:10 compute-0 sudo[269242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:10 compute-0 sudo[269242]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:10 compute-0 sudo[269267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:56:10 compute-0 sudo[269267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:10 compute-0 sudo[269267]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:10 compute-0 sudo[269292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:11 compute-0 sudo[269292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:11 compute-0 sudo[269292]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:11 compute-0 sudo[269317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:56:11 compute-0 sudo[269317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:11 compute-0 ceph-mon[75840]: pgmap v1081: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s wr, 5 op/s
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.521189496 +0000 UTC m=+0.102558257 container create 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.458284028 +0000 UTC m=+0.039652849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:11 compute-0 systemd[1]: Started libpod-conmon-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope.
Nov 22 05:56:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.66111402 +0000 UTC m=+0.242482831 container init 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.669579066 +0000 UTC m=+0.250947827 container start 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 05:56:11 compute-0 bold_jennings[269399]: 167 167
Nov 22 05:56:11 compute-0 systemd[1]: libpod-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope: Deactivated successfully.
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.703582993 +0000 UTC m=+0.284951724 container attach 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.703896862 +0000 UTC m=+0.285265593 container died 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-797dea0ba4dd26d39a4b861e03d9b20e7ac13087b39f8cc6ac29da5e43be5e2b-merged.mount: Deactivated successfully.
Nov 22 05:56:11 compute-0 podman[269383]: 2025-11-22 05:56:11.868117473 +0000 UTC m=+0.449486224 container remove 135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jennings, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:11 compute-0 systemd[1]: libpod-conmon-135f924c10c1056bcfe0447b31d7f86badcf3564aaad367ffb8389dfc76a6ce0.scope: Deactivated successfully.
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "format": "json"}]: dispatch
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5fe2732a-575f-4985-a0be-d017e158a52a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5fe2732a-575f-4985-a0be-d017e158a52a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:12 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:12.153+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fe2732a-575f-4985-a0be-d017e158a52a' of type subvolume
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5fe2732a-575f-4985-a0be-d017e158a52a' of type subvolume
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5fe2732a-575f-4985-a0be-d017e158a52a'' moved to trashcan
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5fe2732a-575f-4985-a0be-d017e158a52a, vol_name:cephfs) < ""
Nov 22 05:56:12 compute-0 podman[269425]: 2025-11-22 05:56:12.082854251 +0000 UTC m=+0.031409508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:56:12 compute-0 podman[269425]: 2025-11-22 05:56:12.262657428 +0000 UTC m=+0.211212595 container create 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 05:56:12 compute-0 systemd[1]: Started libpod-conmon-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope.
Nov 22 05:56:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:56:12 compute-0 podman[269425]: 2025-11-22 05:56:12.445141297 +0000 UTC m=+0.393696534 container init 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:56:12 compute-0 podman[269425]: 2025-11-22 05:56:12.460324862 +0000 UTC m=+0.408880049 container start 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:56:12 compute-0 podman[269425]: 2025-11-22 05:56:12.592232831 +0000 UTC m=+0.540788028 container attach 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 05:56:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 42 KiB/s wr, 4 op/s
Nov 22 05:56:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:13 compute-0 silly_curran[269442]: {
Nov 22 05:56:13 compute-0 silly_curran[269442]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_id": 1,
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "type": "bluestore"
Nov 22 05:56:13 compute-0 silly_curran[269442]:     },
Nov 22 05:56:13 compute-0 silly_curran[269442]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_id": 2,
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "type": "bluestore"
Nov 22 05:56:13 compute-0 silly_curran[269442]:     },
Nov 22 05:56:13 compute-0 silly_curran[269442]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_id": 0,
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:56:13 compute-0 silly_curran[269442]:         "type": "bluestore"
Nov 22 05:56:13 compute-0 silly_curran[269442]:     }
Nov 22 05:56:13 compute-0 silly_curran[269442]: }
Nov 22 05:56:13 compute-0 systemd[1]: libpod-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Deactivated successfully.
Nov 22 05:56:13 compute-0 systemd[1]: libpod-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Consumed 1.102s CPU time.
Nov 22 05:56:13 compute-0 podman[269475]: 2025-11-22 05:56:13.593944877 +0000 UTC m=+0.023754735 container died 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 05:56:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b6034fe4674a90fa1b93580382e9881fc27bee92515039b1a5ea450f640b5d1-merged.mount: Deactivated successfully.
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:13 compute-0 podman[269475]: 2025-11-22 05:56:13.827529269 +0000 UTC m=+0.257339057 container remove 5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:56:13 compute-0 systemd[1]: libpod-conmon-5dab1a29c6cc1f76b07aed445f26514ae1f24b1cd9ae93b12fb03be6a8c47d2d.scope: Deactivated successfully.
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:13 compute-0 sudo[269317]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:56:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "format": "json"}]: dispatch
Nov 22 05:56:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5fe2732a-575f-4985-a0be-d017e158a52a", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:13 compute-0 ceph-mon[75840]: pgmap v1082: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 42 KiB/s wr, 4 op/s
Nov 22 05:56:13 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:56:14 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev bc40a5d8-3215-4baf-816a-e082b58f5f8d does not exist
Nov 22 05:56:14 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 94c08024-d18c-45a0-ad5e-5d1bcf3c4689 does not exist
Nov 22 05:56:14 compute-0 sudo[269491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:56:14 compute-0 sudo[269491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:14 compute-0 sudo[269491]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:14 compute-0 sudo[269516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:56:14 compute-0 sudo[269516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:56:14 compute-0 sudo[269516]: pam_unix(sudo:session): session closed for user root
Nov 22 05:56:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 36 KiB/s wr, 3 op/s
Nov 22 05:56:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:14 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.162 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:56:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:56:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/401438297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.619 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.784 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:56:15 compute-0 nova_compute[255660]: 2025-11-22 05:56:15.785 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:56:16 compute-0 ceph-mon[75840]: pgmap v1083: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 36 KiB/s wr, 3 op/s
Nov 22 05:56:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/401438297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.397 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.397 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.633 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.699 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.700 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.717 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.734 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 05:56:16 compute-0 nova_compute[255660]: 2025-11-22 05:56:16.751 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:56:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 45 KiB/s wr, 4 op/s
Nov 22 05:56:17 compute-0 ceph-mon[75840]: pgmap v1084: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 45 KiB/s wr, 4 op/s
Nov 22 05:56:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:56:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968414510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:56:17 compute-0 nova_compute[255660]: 2025-11-22 05:56:17.221 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:56:17 compute-0 nova_compute[255660]: 2025-11-22 05:56:17.227 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:56:17 compute-0 nova_compute[255660]: 2025-11-22 05:56:17.248 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:56:17 compute-0 nova_compute[255660]: 2025-11-22 05:56:17.250 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:56:17 compute-0 nova_compute[255660]: 2025-11-22 05:56:17.250 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:56:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/968414510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:56:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.317312) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978317350, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1045, "num_deletes": 257, "total_data_size": 1168468, "memory_usage": 1191408, "flush_reason": "Manual Compaction"}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978326323, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1122572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23547, "largest_seqno": 24591, "table_properties": {"data_size": 1117614, "index_size": 2354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11745, "raw_average_key_size": 19, "raw_value_size": 1107103, "raw_average_value_size": 1835, "num_data_blocks": 106, "num_entries": 603, "num_filter_entries": 603, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790919, "oldest_key_time": 1763790919, "file_creation_time": 1763790978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9056 microseconds, and 5552 cpu microseconds.
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.326368) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1122572 bytes OK
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.326389) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328545) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328571) EVENT_LOG_v1 {"time_micros": 1763790978328563, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.328593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1163248, prev total WAL file size 1163248, number of live WAL files 2.
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.329391) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1096KB)], [53(8341KB)]
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978329443, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9663810, "oldest_snapshot_seqno": -1}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5224 keys, 9567487 bytes, temperature: kUnknown
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978434117, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9567487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528949, "index_size": 24348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 129901, "raw_average_key_size": 24, "raw_value_size": 9431517, "raw_average_value_size": 1805, "num_data_blocks": 1017, "num_entries": 5224, "num_filter_entries": 5224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763790978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.434539) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9567487 bytes
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.436643) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.2 rd, 91.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(17.1) write-amplify(8.5) OK, records in: 5753, records dropped: 529 output_compression: NoCompression
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.436671) EVENT_LOG_v1 {"time_micros": 1763790978436658, "job": 28, "event": "compaction_finished", "compaction_time_micros": 104813, "compaction_time_cpu_micros": 22400, "output_level": 6, "num_output_files": 1, "total_output_size": 9567487, "num_input_records": 5753, "num_output_records": 5224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978437230, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763790978440235, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.329297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:56:18.440386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:56:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 27 KiB/s wr, 3 op/s
Nov 22 05:56:19 compute-0 ceph-mon[75840]: pgmap v1085: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 27 KiB/s wr, 3 op/s
Nov 22 05:56:20 compute-0 nova_compute[255660]: 2025-11-22 05:56:20.251 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:20 compute-0 nova_compute[255660]: 2025-11-22 05:56:20.251 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:20 compute-0 nova_compute[255660]: 2025-11-22 05:56:20.252 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:56:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 3 op/s
Nov 22 05:56:21 compute-0 ceph-mon[75840]: pgmap v1086: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 32 KiB/s wr, 3 op/s
Nov 22 05:56:22 compute-0 nova_compute[255660]: 2025-11-22 05:56:22.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 14 KiB/s wr, 2 op/s
Nov 22 05:56:23 compute-0 nova_compute[255660]: 2025-11-22 05:56:23.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:23 compute-0 nova_compute[255660]: 2025-11-22 05:56:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:23 compute-0 ceph-mon[75840]: pgmap v1087: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 14 KiB/s wr, 2 op/s
Nov 22 05:56:24 compute-0 nova_compute[255660]: 2025-11-22 05:56:24.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 05:56:25 compute-0 nova_compute[255660]: 2025-11-22 05:56:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:25 compute-0 nova_compute[255660]: 2025-11-22 05:56:25.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:56:25 compute-0 nova_compute[255660]: 2025-11-22 05:56:25.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:56:25 compute-0 nova_compute[255660]: 2025-11-22 05:56:25.545 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:56:25 compute-0 nova_compute[255660]: 2025-11-22 05:56:25.546 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:56:25 compute-0 ceph-mon[75840]: pgmap v1088: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 05:56:26 compute-0 podman[269585]: 2025-11-22 05:56:26.298297902 +0000 UTC m=+0.147247390 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 05:56:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 05:56:27 compute-0 ceph-mon[75840]: pgmap v1089: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta.tmp'
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta.tmp' to config b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/.meta'
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:56:27 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:56:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:27 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:28 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:28 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 05:56:28 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.4 KiB/s wr, 1 op/s
Nov 22 05:56:29 compute-0 ceph-mon[75840]: pgmap v1090: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 5.4 KiB/s wr, 1 op/s
Nov 22 05:56:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 22 05:56:31 compute-0 ceph-mon[75840]: pgmap v1091: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta.tmp'
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta.tmp' to config b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/.meta'
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:32 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 05:56:32 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 05:56:33 compute-0 ceph-mon[75840]: pgmap v1092: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:34 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.400 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:56:34 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.402 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:56:34 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:34.403 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta.tmp'
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta.tmp' to config b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620/.meta'
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 05:56:34 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:56:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:56:35 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:56:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mon[75840]: pgmap v1093: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s wr, 0 op/s
Nov 22 05:56:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_6e38d87a-ae0f-4d08-9b46-1181605e24ce", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:36 compute-0 podman[269610]: 2025-11-22 05:56:36.202224246 +0000 UTC m=+0.060258549 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 05:56:36 compute-0 podman[269611]: 2025-11-22 05:56:36.24435412 +0000 UTC m=+0.089161360 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:56:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 22 05:56:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.936 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:56:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:56:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:56:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:56:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:37 compute-0 ceph-mon[75840]: pgmap v1094: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 1 op/s
Nov 22 05:56:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:39.030+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e91eaa5-0ca4-4703-941f-d4b008c28620' of type subvolume
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e91eaa5-0ca4-4703-941f-d4b008c28620' of type subvolume
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e91eaa5-0ca4-4703-941f-d4b008c28620'' moved to trashcan
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e91eaa5-0ca4-4703-941f-d4b008c28620, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:56:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:56:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67
Nov 22 05:56:39 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce/dfb8d971-a771-4d78-801e-f56a0c897c67],prefix=session evict} (starting...)
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:39.572+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e38d87a-ae0f-4d08-9b46-1181605e24ce' of type subvolume
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6e38d87a-ae0f-4d08-9b46-1181605e24ce' of type subvolume
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6e38d87a-ae0f-4d08-9b46-1181605e24ce'' moved to trashcan
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e38d87a-ae0f-4d08-9b46-1181605e24ce, vol_name:cephfs) < ""
Nov 22 05:56:39 compute-0 ceph-mon[75840]: pgmap v1095: 321 pgs: 321 active+clean; 58 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s wr, 2 op/s
Nov 22 05:56:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:56:39 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:56:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 4 op/s
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "format": "json"}]: dispatch
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e91eaa5-0ca4-4703-941f-d4b008c28620", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "format": "json"}]: dispatch
Nov 22 05:56:41 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e38d87a-ae0f-4d08-9b46-1181605e24ce", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mon[75840]: pgmap v1096: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s wr, 4 op/s
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta.tmp'
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta.tmp' to config b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4/.meta'
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta.tmp'
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta.tmp' to config b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/.meta'
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: pgmap v1097: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:56:43
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms']
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:56:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:56:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta.tmp'
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta.tmp' to config b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b/.meta'
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:45 compute-0 ceph-mon[75840]: pgmap v1098: 321 pgs: 321 active+clean; 59 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 53 KiB/s wr, 5 op/s
Nov 22 05:56:45 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:56:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:56:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:56:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:56:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_b91da8df-240a-407a-a34e-98bfc943cf90", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:46 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:46.976+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5465065f-2d60-4371-98a8-d41c3f15e3e4' of type subvolume
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5465065f-2d60-4371-98a8-d41c3f15e3e4' of type subvolume
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5465065f-2d60-4371-98a8-d41c3f15e3e4'' moved to trashcan
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5465065f-2d60-4371-98a8-d41c3f15e3e4, vol_name:cephfs) < ""
Nov 22 05:56:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:56:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:56:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: pgmap v1099: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 6 op/s
Nov 22 05:56:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "format": "json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5465065f-2d60-4371-98a8-d41c3f15e3e4", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:56:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/409705138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:56:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta.tmp'
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta.tmp' to config b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a/.meta'
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:49 compute-0 ceph-mon[75840]: pgmap v1100: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 51 KiB/s wr, 6 op/s
Nov 22 05:56:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:56:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:56:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c
Nov 22 05:56:50 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90/c958a687-734c-4723-975b-2d856dc5a38c],prefix=session evict} (starting...)
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b91da8df-240a-407a-a34e-98bfc943cf90, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b91da8df-240a-407a-a34e-98bfc943cf90, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:50.397+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b91da8df-240a-407a-a34e-98bfc943cf90' of type subvolume
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b91da8df-240a-407a-a34e-98bfc943cf90' of type subvolume
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b91da8df-240a-407a-a34e-98bfc943cf90'' moved to trashcan
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b91da8df-240a-407a-a34e-98bfc943cf90, vol_name:cephfs) < ""
Nov 22 05:56:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 8 op/s
Nov 22 05:56:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:51 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 05:56:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:56:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:56:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b91da8df-240a-407a-a34e-98bfc943cf90", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mon[75840]: pgmap v1101: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 KiB/s wr, 8 op/s
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:52 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:52.639+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1c1ce2-dc9b-48d6-be0f-cc790f23422a' of type subvolume
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1c1ce2-dc9b-48d6-be0f-cc790f23422a' of type subvolume
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff1c1ce2-dc9b-48d6-be0f-cc790f23422a'' moved to trashcan
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1c1ce2-dc9b-48d6-be0f-cc790f23422a, vol_name:cephfs) < ""
Nov 22 05:56:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00029092748827896074 of space, bias 4.0, pg target 0.3491129859347529 quantized to 16 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:56:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "format": "json"}]: dispatch
Nov 22 05:56:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1c1ce2-dc9b-48d6-be0f-cc790f23422a", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:53 compute-0 ceph-mon[75840]: pgmap v1102: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 47 KiB/s wr, 6 op/s
Nov 22 05:56:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta.tmp'
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta.tmp' to config b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/.meta'
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:56:53 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:56:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:56:53 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:54 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:54 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 05:56:54 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:56:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 46 KiB/s wr, 5 op/s
Nov 22 05:56:55 compute-0 ceph-mon[75840]: pgmap v1103: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 46 KiB/s wr, 5 op/s
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:56:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:56:56.105+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a73a7b16-fd1c-4116-9ec6-189608a7680b' of type subvolume
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a73a7b16-fd1c-4116-9ec6-189608a7680b' of type subvolume
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a73a7b16-fd1c-4116-9ec6-189608a7680b'' moved to trashcan
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a73a7b16-fd1c-4116-9ec6-189608a7680b, vol_name:cephfs) < ""
Nov 22 05:56:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 8 op/s
Nov 22 05:56:57 compute-0 podman[269651]: 2025-11-22 05:56:57.258961478 +0000 UTC m=+0.089592562 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 05:56:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:56:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:56:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:56:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:56:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a73a7b16-fd1c-4116-9ec6-189608a7680b", "force": true, "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mon[75840]: pgmap v1104: 321 pgs: 321 active+clean; 59 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 77 KiB/s wr, 8 op/s
Nov 22 05:56:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:56:57 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:56:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:56:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 22 05:56:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:56:59 compute-0 ceph-mon[75840]: pgmap v1105: 321 pgs: 321 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 22 05:56:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:56:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e6d675c7-bbaf-4177-8fb3-cadda9f6eea6, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 91 KiB/s wr, 8 op/s
Nov 22 05:57:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:57:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc
Nov 22 05:57:01 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774/b507e83f-d6c3-4469-82d6-9ef7401d92dc],prefix=session evict} (starting...)
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:803dbd72-541a-4f96-91c3-545ca7945362, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta.tmp'
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta.tmp' to config b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5/.meta'
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:01 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '194fe9d7-4252-42b8-9e5b-0b7a3e0b3774' of type subvolume
Nov 22 05:57:01 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:01.307+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '194fe9d7-4252-42b8-9e5b-0b7a3e0b3774' of type subvolume
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/194fe9d7-4252-42b8-9e5b-0b7a3e0b3774'' moved to trashcan
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:01 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:194fe9d7-4252-42b8-9e5b-0b7a3e0b3774, vol_name:cephfs) < ""
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e6d675c7-bbaf-4177-8fb3-cadda9f6eea6", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: pgmap v1106: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 91 KiB/s wr, 8 op/s
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:01 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 8 op/s
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "803dbd72-541a-4f96-91c3-545ca7945362", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "format": "json"}]: dispatch
Nov 22 05:57:02 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "194fe9d7-4252-42b8-9e5b-0b7a3e0b3774", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:03 compute-0 ceph-mon[75840]: pgmap v1107: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 57 KiB/s wr, 8 op/s
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:57:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:57:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta.tmp'
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta.tmp' to config b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea/.meta'
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:04 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 57 KiB/s wr, 6 op/s
Nov 22 05:57:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:04 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:04 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta.tmp'
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta.tmp' to config b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73/.meta'
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 05:57:05 compute-0 ceph-mon[75840]: pgmap v1108: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 57 KiB/s wr, 6 op/s
Nov 22 05:57:05 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 103 KiB/s wr, 10 op/s
Nov 22 05:57:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 05:57:07 compute-0 podman[269679]: 2025-11-22 05:57:07.221759306 +0000 UTC m=+0.077529148 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 22 05:57:07 compute-0 podman[269680]: 2025-11-22 05:57:07.253099466 +0000 UTC m=+0.098807869 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 05:57:07 compute-0 ceph-mon[75840]: pgmap v1109: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 103 KiB/s wr, 10 op/s
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:57:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 05:57:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1a737af8-04c5-43cf-b788-696f3029c8ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1a737af8-04c5-43cf-b788-696f3029c8ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:08.423+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a737af8-04c5-43cf-b788-696f3029c8ea' of type subvolume
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1a737af8-04c5-43cf-b788-696f3029c8ea' of type subvolume
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1a737af8-04c5-43cf-b788-696f3029c8ea'' moved to trashcan
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1a737af8-04c5-43cf-b788-696f3029c8ea, vol_name:cephfs) < ""
Nov 22 05:57:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 05:57:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta.tmp'
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta.tmp' to config b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851/.meta'
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1a737af8-04c5-43cf-b788-696f3029c8ea", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:09 compute-0 ceph-mon[75840]: pgmap v1110: 321 pgs: 321 active+clean; 60 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 73 KiB/s wr, 8 op/s
Nov 22 05:57:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 109 KiB/s wr, 9 op/s
Nov 22 05:57:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:57:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:57:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta.tmp'
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta.tmp' to config b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4/.meta'
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:12 compute-0 ceph-mon[75840]: pgmap v1111: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 109 KiB/s wr, 9 op/s
Nov 22 05:57:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:12 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 9 op/s
Nov 22 05:57:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta.tmp'
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta.tmp' to config b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd/.meta'
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:14 compute-0 ceph-mon[75840]: pgmap v1112: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 KiB/s wr, 9 op/s
Nov 22 05:57:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 05:57:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:14 compute-0 sudo[269719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:14 compute-0 sudo[269719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:14 compute-0 sudo[269719]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:14 compute-0 sudo[269744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:57:14 compute-0 sudo[269744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:14 compute-0 sudo[269744]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:14 compute-0 sudo[269769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:14 compute-0 sudo[269769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:14 compute-0 sudo[269769]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:14 compute-0 sudo[269794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:57:14 compute-0 sudo[269794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 84 KiB/s wr, 8 op/s
Nov 22 05:57:15 compute-0 ceph-mon[75840]: pgmap v1113: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 84 KiB/s wr, 8 op/s
Nov 22 05:57:15 compute-0 sudo[269794]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.156 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.157 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4c4c6774-164a-465a-9d86-6647f907b45e does not exist
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev dea47322-d282-4fd0-9004-95a64b6fd500 does not exist
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev ad0208a5-a914-4bc8-a269-21df8dfdf088 does not exist
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:15 compute-0 sudo[269852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:15 compute-0 sudo[269852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:15 compute-0 sudo[269852]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 05:57:15 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 sudo[269897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:57:15 compute-0 sudo[269897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:15 compute-0 sudo[269897]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:15 compute-0 sudo[269922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:15 compute-0 sudo[269922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:15 compute-0 sudo[269922]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:31743d3b-c309-45ea-a481-74ddccf572f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:31743d3b-c309-45ea-a481-74ddccf572f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:15.523+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '31743d3b-c309-45ea-a481-74ddccf572f4' of type subvolume
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '31743d3b-c309-45ea-a481-74ddccf572f4' of type subvolume
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/31743d3b-c309-45ea-a481-74ddccf572f4'' moved to trashcan
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:15 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:31743d3b-c309-45ea-a481-74ddccf572f4, vol_name:cephfs) < ""
Nov 22 05:57:15 compute-0 sudo[269947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:57:15 compute-0 sudo[269947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:57:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595235561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.620 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.777 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.778 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.854 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.854 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:57:15 compute-0 nova_compute[255660]: 2025-11-22 05:57:15.879 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:57:15 compute-0 podman[270016]: 2025-11-22 05:57:15.961818778 +0000 UTC m=+0.052932199 container create e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:15 compute-0 systemd[1]: Started libpod-conmon-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope.
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:15.937011853 +0000 UTC m=+0.028125314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:16.055136588 +0000 UTC m=+0.146250029 container init e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "31743d3b-c309-45ea-a481-74ddccf572f4", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2595235561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:16.062319971 +0000 UTC m=+0.153433382 container start e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:16.066461982 +0000 UTC m=+0.157575423 container attach e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:57:16 compute-0 quizzical_goldberg[270050]: 167 167
Nov 22 05:57:16 compute-0 systemd[1]: libpod-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope: Deactivated successfully.
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:16.070621713 +0000 UTC m=+0.161735134 container died e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-14f91019d298f1fe8079ddd01b23674bb94c2fffd933e6eb0d28549cb86f673a-merged.mount: Deactivated successfully.
Nov 22 05:57:16 compute-0 podman[270016]: 2025-11-22 05:57:16.117039447 +0000 UTC m=+0.208152858 container remove e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:57:16 compute-0 systemd[1]: libpod-conmon-e859a0760336650dc5abd5cc694fdb9ee937d59891c45aeaa8e835444839e38a.scope: Deactivated successfully.
Nov 22 05:57:16 compute-0 podman[270075]: 2025-11-22 05:57:16.302947259 +0000 UTC m=+0.050822703 container create 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:57:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870850556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:57:16 compute-0 systemd[1]: Started libpod-conmon-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope.
Nov 22 05:57:16 compute-0 nova_compute[255660]: 2025-11-22 05:57:16.366 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:57:16 compute-0 podman[270075]: 2025-11-22 05:57:16.279057979 +0000 UTC m=+0.026933453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:16 compute-0 nova_compute[255660]: 2025-11-22 05:57:16.374 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:57:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:16 compute-0 nova_compute[255660]: 2025-11-22 05:57:16.390 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:57:16 compute-0 podman[270075]: 2025-11-22 05:57:16.391859482 +0000 UTC m=+0.139734936 container init 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 05:57:16 compute-0 nova_compute[255660]: 2025-11-22 05:57:16.393 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:57:16 compute-0 nova_compute[255660]: 2025-11-22 05:57:16.393 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:57:16 compute-0 podman[270075]: 2025-11-22 05:57:16.402835016 +0000 UTC m=+0.150710450 container start 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:57:16 compute-0 podman[270075]: 2025-11-22 05:57:16.408459077 +0000 UTC m=+0.156334541 container attach 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:57:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 116 KiB/s wr, 29 op/s
Nov 22 05:57:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2870850556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:57:17 compute-0 ceph-mon[75840]: pgmap v1114: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 116 KiB/s wr, 29 op/s
Nov 22 05:57:17 compute-0 friendly_hamilton[270094]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:57:17 compute-0 friendly_hamilton[270094]: --> relative data size: 1.0
Nov 22 05:57:17 compute-0 friendly_hamilton[270094]: --> All data devices are unavailable
Nov 22 05:57:17 compute-0 systemd[1]: libpod-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Deactivated successfully.
Nov 22 05:57:17 compute-0 podman[270075]: 2025-11-22 05:57:17.519168079 +0000 UTC m=+1.267043553 container died 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 05:57:17 compute-0 systemd[1]: libpod-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Consumed 1.060s CPU time.
Nov 22 05:57:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-542725e5586258ad41f3d6dfb8bce7d9601b07da00c5639c1eef29cd262c6e91-merged.mount: Deactivated successfully.
Nov 22 05:57:17 compute-0 podman[270075]: 2025-11-22 05:57:17.599929193 +0000 UTC m=+1.347804657 container remove 969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hamilton, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 05:57:17 compute-0 systemd[1]: libpod-conmon-969d3eb7b32224c82e57d05d9275926664d7e209fa63608ec654b6d198843cce.scope: Deactivated successfully.
Nov 22 05:57:17 compute-0 sudo[269947]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:17 compute-0 sudo[270134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:17 compute-0 sudo[270134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:17 compute-0 sudo[270134]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:17 compute-0 sudo[270159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:57:17 compute-0 sudo[270159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:17 compute-0 sudo[270159]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:17 compute-0 sudo[270184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:17 compute-0 sudo[270184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:17 compute-0 sudo[270184]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:18 compute-0 sudo[270209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:57:18 compute-0 sudo[270209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:18.076+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd' of type subvolume
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd' of type subvolume
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd'' moved to trashcan
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.46077268 +0000 UTC m=+0.047146254 container create 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:57:18 compute-0 systemd[1]: Started libpod-conmon-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope.
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.442030818 +0000 UTC m=+0.028404472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.566552974 +0000 UTC m=+0.152926628 container init 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.57946534 +0000 UTC m=+0.165838934 container start 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.584019242 +0000 UTC m=+0.170392916 container attach 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 05:57:18 compute-0 boring_faraday[270291]: 167 167
Nov 22 05:57:18 compute-0 systemd[1]: libpod-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope: Deactivated successfully.
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.586953011 +0000 UTC m=+0.173326595 container died 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 05:57:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5eb75ba97bb0029f4bb284ee2b49bba14268927aef415dae37f76a5f82aa6bd-merged.mount: Deactivated successfully.
Nov 22 05:57:18 compute-0 podman[270275]: 2025-11-22 05:57:18.627393264 +0000 UTC m=+0.213766828 container remove 86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 systemd[1]: libpod-conmon-86d304c6acb30e50794b3791beabdd56be18902f47a3e764f055f753f8c49005.scope: Deactivated successfully.
Nov 22 05:57:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:57:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:57:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:18 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:18 compute-0 podman[270315]: 2025-11-22 05:57:18.828253787 +0000 UTC m=+0.055302524 container create 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:57:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 73 KiB/s wr, 45 op/s
Nov 22 05:57:18 compute-0 systemd[1]: Started libpod-conmon-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope.
Nov 22 05:57:18 compute-0 podman[270315]: 2025-11-22 05:57:18.799165807 +0000 UTC m=+0.026214604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:18 compute-0 podman[270315]: 2025-11-22 05:57:18.932352526 +0000 UTC m=+0.159401313 container init 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:57:18 compute-0 podman[270315]: 2025-11-22 05:57:18.947902952 +0000 UTC m=+0.174951679 container start 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:57:18 compute-0 podman[270315]: 2025-11-22 05:57:18.95264398 +0000 UTC m=+0.179692707 container attach 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 05:57:19 compute-0 hardcore_nash[270332]: {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     "0": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "devices": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "/dev/loop3"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             ],
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_name": "ceph_lv0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_size": "21470642176",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "name": "ceph_lv0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "tags": {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_name": "ceph",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.crush_device_class": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.encrypted": "0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_id": "0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.vdo": "0"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             },
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "vg_name": "ceph_vg0"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         }
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     ],
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     "1": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "devices": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "/dev/loop4"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             ],
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_name": "ceph_lv1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_size": "21470642176",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "name": "ceph_lv1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "tags": {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_name": "ceph",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.crush_device_class": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.encrypted": "0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_id": "1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.vdo": "0"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             },
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "vg_name": "ceph_vg1"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         }
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     ],
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     "2": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "devices": [
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "/dev/loop5"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             ],
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_name": "ceph_lv2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_size": "21470642176",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "name": "ceph_lv2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "tags": {
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.cluster_name": "ceph",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.crush_device_class": "",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.encrypted": "0",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osd_id": "2",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:                 "ceph.vdo": "0"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             },
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "type": "block",
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:             "vg_name": "ceph_vg2"
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:         }
Nov 22 05:57:19 compute-0 hardcore_nash[270332]:     ]
Nov 22 05:57:19 compute-0 hardcore_nash[270332]: }
Nov 22 05:57:19 compute-0 systemd[1]: libpod-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope: Deactivated successfully.
Nov 22 05:57:19 compute-0 podman[270315]: 2025-11-22 05:57:19.686651109 +0000 UTC m=+0.913699816 container died 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:57:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "format": "json"}]: dispatch
Nov 22 05:57:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ce7ccf0-07b6-4c4b-aace-04a9aa7606fd", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:19 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:19 compute-0 ceph-mon[75840]: pgmap v1115: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 73 KiB/s wr, 45 op/s
Nov 22 05:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8895c8e0d7793204a66b280629ea43bdf0a21fc1262c74fd5f2cdd9de99b36c-merged.mount: Deactivated successfully.
Nov 22 05:57:19 compute-0 podman[270315]: 2025-11-22 05:57:19.754090716 +0000 UTC m=+0.981139413 container remove 85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:57:19 compute-0 systemd[1]: libpod-conmon-85918d2bffe2a170b59045cf960013d2df9320791cb3845b505558a05ec5ff8a.scope: Deactivated successfully.
Nov 22 05:57:19 compute-0 sudo[270209]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:19 compute-0 sudo[270355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:19 compute-0 sudo[270355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:19 compute-0 sudo[270355]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:19 compute-0 sudo[270380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:57:19 compute-0 sudo[270380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:19 compute-0 sudo[270380]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:19 compute-0 sudo[270405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:19 compute-0 sudo[270405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:19 compute-0 sudo[270405]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:20 compute-0 sudo[270430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:57:20 compute-0 sudo[270430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.39988976 +0000 UTC m=+0.055882238 container create e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 05:57:20 compute-0 systemd[1]: Started libpod-conmon-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope.
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.372203959 +0000 UTC m=+0.028196487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.493902799 +0000 UTC m=+0.149895287 container init e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.504060332 +0000 UTC m=+0.160052800 container start e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.508215964 +0000 UTC m=+0.164208502 container attach e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 05:57:20 compute-0 goofy_taussig[270512]: 167 167
Nov 22 05:57:20 compute-0 systemd[1]: libpod-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope: Deactivated successfully.
Nov 22 05:57:20 compute-0 conmon[270512]: conmon e02c124e88603d477e89 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope/container/memory.events
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.513015422 +0000 UTC m=+0.169007890 container died e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f667201e2f4437faab8a71dbf4c982daf55fcb5ce70bc0ab21923b13ac0e2f-merged.mount: Deactivated successfully.
Nov 22 05:57:20 compute-0 podman[270496]: 2025-11-22 05:57:20.56294253 +0000 UTC m=+0.218934978 container remove e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_taussig, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:57:20 compute-0 systemd[1]: libpod-conmon-e02c124e88603d477e89005b9ad182407352329a63959f28d890d215eeef0a3b.scope: Deactivated successfully.
Nov 22 05:57:20 compute-0 podman[270536]: 2025-11-22 05:57:20.756950358 +0000 UTC m=+0.055691402 container create 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:57:20 compute-0 systemd[1]: Started libpod-conmon-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope.
Nov 22 05:57:20 compute-0 podman[270536]: 2025-11-22 05:57:20.74023879 +0000 UTC m=+0.038979864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:57:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:57:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 96 KiB/s wr, 68 op/s
Nov 22 05:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:57:20 compute-0 podman[270536]: 2025-11-22 05:57:20.857997707 +0000 UTC m=+0.156738831 container init 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 05:57:20 compute-0 podman[270536]: 2025-11-22 05:57:20.871153349 +0000 UTC m=+0.169894433 container start 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:57:20 compute-0 podman[270536]: 2025-11-22 05:57:20.876467412 +0000 UTC m=+0.175208486 container attach 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:57:21 compute-0 nova_compute[255660]: 2025-11-22 05:57:21.395 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:21 compute-0 nova_compute[255660]: 2025-11-22 05:57:21.420 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:21 compute-0 nova_compute[255660]: 2025-11-22 05:57:21.420 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:21 compute-0 nova_compute[255660]: 2025-11-22 05:57:21.421 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:21 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:21.732+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4fc8a89-bd59-4c0a-8c81-0de4fa453851' of type subvolume
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c4fc8a89-bd59-4c0a-8c81-0de4fa453851' of type subvolume
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4fc8a89-bd59-4c0a-8c81-0de4fa453851'' moved to trashcan
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4fc8a89-bd59-4c0a-8c81-0de4fa453851, vol_name:cephfs) < ""
Nov 22 05:57:21 compute-0 ceph-mon[75840]: pgmap v1116: 321 pgs: 321 active+clean; 61 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 96 KiB/s wr, 68 op/s
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]: {
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_id": 1,
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "type": "bluestore"
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     },
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_id": 2,
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "type": "bluestore"
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     },
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_id": 0,
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:         "type": "bluestore"
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]:     }
Nov 22 05:57:21 compute-0 bold_ishizaka[270553]: }
Nov 22 05:57:21 compute-0 systemd[1]: libpod-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Deactivated successfully.
Nov 22 05:57:21 compute-0 systemd[1]: libpod-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Consumed 1.079s CPU time.
Nov 22 05:57:21 compute-0 podman[270536]: 2025-11-22 05:57:21.946176866 +0000 UTC m=+1.244917940 container died 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:57:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e151e7d67b5a7dc3c63881c3373961f44d0dfd13546baee5b446183f91edd06-merged.mount: Deactivated successfully.
Nov 22 05:57:22 compute-0 podman[270536]: 2025-11-22 05:57:22.008884226 +0000 UTC m=+1.307625260 container remove 6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:57:22 compute-0 systemd[1]: libpod-conmon-6c6f7d68dc195cd692477671b0e468c823a5048bceb4bd1c0b4d9c09b2ac4295.scope: Deactivated successfully.
Nov 22 05:57:22 compute-0 sudo[270430]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:57:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:57:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c67e547e-853b-4d9a-8ec6-3ee419c38d4c does not exist
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c969fc21-4322-474a-b177-448c2cb22dc7 does not exist
Nov 22 05:57:22 compute-0 sudo[270601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:57:22 compute-0 sudo[270601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:22 compute-0 sudo[270601]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:22 compute-0 sudo[270626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:57:22 compute-0 sudo[270626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:57:22 compute-0 sudo[270626]: pam_unix(sudo:session): session closed for user root
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:57:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 05:57:22 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 100 KiB/s wr, 70 op/s
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4fc8a89-bd59-4c0a-8c81-0de4fa453851", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:22 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:23 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:23 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:23 compute-0 ceph-mon[75840]: pgmap v1117: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 100 KiB/s wr, 70 op/s
Nov 22 05:57:24 compute-0 nova_compute[255660]: 2025-11-22 05:57:24.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:24 compute-0 nova_compute[255660]: 2025-11-22 05:57:24.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 05:57:25 compute-0 nova_compute[255660]: 2025-11-22 05:57:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:25 compute-0 nova_compute[255660]: 2025-11-22 05:57:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70f361da-7ed0-4639-8730-40afb694cc73, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70f361da-7ed0-4639-8730-40afb694cc73, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:25.517+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f361da-7ed0-4639-8730-40afb694cc73' of type subvolume
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f361da-7ed0-4639-8730-40afb694cc73' of type subvolume
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70f361da-7ed0-4639-8730-40afb694cc73'' moved to trashcan
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f361da-7ed0-4639-8730-40afb694cc73, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: Creating meta for ID tempest-cephx-id-1175252805 with tenant db75a1944ad845ea9c7d9708d52f1e25
Nov 22 05:57:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 22 05:57:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume authorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, tenant_id:db75a1944ad845ea9c7d9708d52f1e25, vol_name:cephfs) < ""
Nov 22 05:57:25 compute-0 ceph-mon[75840]: pgmap v1118: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 05:57:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 22 05:57:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1175252805", "caps": ["mds", "allow rw path=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_9842ed03-cc84-4abe-85fe-b6107828690f", "mon", "allow r"], "format": "json"}]': finished
Nov 22 05:57:26 compute-0 nova_compute[255660]: 2025-11-22 05:57:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 05:57:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f361da-7ed0-4639-8730-40afb694cc73", "format": "json"}]: dispatch
Nov 22 05:57:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f361da-7ed0-4639-8730-40afb694cc73", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "tenant_id": "db75a1944ad845ea9c7d9708d52f1e25", "access_level": "rw", "format": "json"}]: dispatch
Nov 22 05:57:27 compute-0 nova_compute[255660]: 2025-11-22 05:57:27.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:57:27 compute-0 nova_compute[255660]: 2025-11-22 05:57:27.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:57:27 compute-0 nova_compute[255660]: 2025-11-22 05:57:27.132 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:57:27 compute-0 nova_compute[255660]: 2025-11-22 05:57:27.348 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:57:28 compute-0 ceph-mon[75840]: pgmap v1119: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 99 KiB/s wr, 69 op/s
Nov 22 05:57:28 compute-0 podman[270652]: 2025-11-22 05:57:28.247510418 +0000 UTC m=+0.096679031 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 95 KiB/s wr, 52 op/s
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:29.114+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41dc1a04-6ad1-4773-8daa-7038ec6071c5' of type subvolume
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '41dc1a04-6ad1-4773-8daa-7038ec6071c5' of type subvolume
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/41dc1a04-6ad1-4773-8daa-7038ec6071c5'' moved to trashcan
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:41dc1a04-6ad1-4773-8daa-7038ec6071c5, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mon[75840]: pgmap v1120: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 95 KiB/s wr, 52 op/s
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"} v 0) v1
Nov 22 05:57:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"} v 0) v1
Nov 22 05:57:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume deauthorize, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1175252805, client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3
Nov 22 05:57:29 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session evict {filters=[auth_name=tempest-cephx-id-1175252805,client_metadata.root=/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f/cd47ea4e-104b-41a1-a49e-3bbe887870b3],prefix=session evict} (starting...)
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 22 05:57:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1175252805, format:json, prefix:fs subvolume evict, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "format": "json"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "41dc1a04-6ad1-4773-8daa-7038ec6071c5", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1175252805"}]': finished
Nov 22 05:57:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "auth_id": "tempest-cephx-id-1175252805", "format": "json"}]: dispatch
Nov 22 05:57:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 92 KiB/s wr, 32 op/s
Nov 22 05:57:31 compute-0 ceph-mon[75840]: pgmap v1121: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 92 KiB/s wr, 32 op/s
Nov 22 05:57:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "format": "json"}]: dispatch
Nov 22 05:57:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "format": "json"}]: dispatch
Nov 22 05:57:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 95 KiB/s wr, 11 op/s
Nov 22 05:57:33 compute-0 ceph-mon[75840]: pgmap v1122: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 95 KiB/s wr, 11 op/s
Nov 22 05:57:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9842ed03-cc84-4abe-85fe-b6107828690f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9842ed03-cc84-4abe-85fe-b6107828690f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:34.406+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9842ed03-cc84-4abe-85fe-b6107828690f' of type subvolume
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9842ed03-cc84-4abe-85fe-b6107828690f' of type subvolume
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9842ed03-cc84-4abe-85fe-b6107828690f'' moved to trashcan
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9842ed03-cc84-4abe-85fe-b6107828690f, vol_name:cephfs) < ""
Nov 22 05:57:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp'
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta.tmp' to config b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1/.meta'
Nov 22 05:57:35 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:ed5a4ca5-3621-4605-bdac-e3cb3da09c16, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:35 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:35.514 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:57:35 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:35.515 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:57:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "format": "json"}]: dispatch
Nov 22 05:57:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9842ed03-cc84-4abe-85fe-b6107828690f", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:35 compute-0 ceph-mon[75840]: pgmap v1123: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:57:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:57:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16_e71660f7-dae4-4a03-8eba-3b01c731a81b", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:36 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "snap_name": "ed5a4ca5-3621-4605-bdac-e3cb3da09c16", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:57:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:57:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:36.937 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:57:37 compute-0 ceph-mon[75840]: pgmap v1124: 321 pgs: 321 active+clean; 62 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:57:38 compute-0 podman[270681]: 2025-11-22 05:57:38.260216503 +0000 UTC m=+0.097111842 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:57:38 compute-0 podman[270680]: 2025-11-22 05:57:38.260742298 +0000 UTC m=+0.108433597 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 05:57:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:39 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:39.022+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1465b04-fd21-4d9f-bc84-54d95bef1ba1' of type subvolume
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c1465b04-fd21-4d9f-bc84-54d95bef1ba1' of type subvolume
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c1465b04-fd21-4d9f-bc84-54d95bef1ba1'' moved to trashcan
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:39 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c1465b04-fd21-4d9f-bc84-54d95bef1ba1, vol_name:cephfs) < ""
Nov 22 05:57:39 compute-0 ceph-mon[75840]: pgmap v1125: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:57:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "format": "json"}]: dispatch
Nov 22 05:57:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c1465b04-fd21-4d9f-bc84-54d95bef1ba1", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 56 KiB/s wr, 6 op/s
Nov 22 05:57:41 compute-0 ceph-mon[75840]: pgmap v1126: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 56 KiB/s wr, 6 op/s
Nov 22 05:57:42 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:57:42.518 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta.tmp'
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta.tmp' to config b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea/.meta'
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 22 05:57:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 22 05:57:42 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 22 05:57:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:43 compute-0 sshd-session[270716]: Invalid user solana from 80.94.92.166 port 49100
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:57:43
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images']
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:57:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:57:43 compute-0 sshd-session[270716]: Connection closed by invalid user solana 80.94.92.166 port 49100 [preauth]
Nov 22 05:57:43 compute-0 ceph-mon[75840]: pgmap v1127: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 64 KiB/s wr, 7 op/s
Nov 22 05:57:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:43 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 05:57:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:43 compute-0 ceph-mon[75840]: osdmap e149: 3 total, 3 up, 3 in
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:57:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 05:57:45 compute-0 ceph-mon[75840]: pgmap v1129: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 05:57:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 05:57:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:57:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:57:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:57:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7ea7dcce-e673-4744-867d-5b02c225beea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7ea7dcce-e673-4744-867d-5b02c225beea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ea7dcce-e673-4744-867d-5b02c225beea' of type subvolume
Nov 22 05:57:47 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:47.353+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7ea7dcce-e673-4744-867d-5b02c225beea' of type subvolume
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7ea7dcce-e673-4744-867d-5b02c225beea'' moved to trashcan
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7ea7dcce-e673-4744-867d-5b02c225beea, vol_name:cephfs) < ""
Nov 22 05:57:48 compute-0 ceph-mon[75840]: pgmap v1130: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 44 KiB/s wr, 5 op/s
Nov 22 05:57:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:57:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2417489919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:57:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:57:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "format": "json"}]: dispatch
Nov 22 05:57:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7ea7dcce-e673-4744-867d-5b02c225beea", "force": true, "format": "json"}]: dispatch
Nov 22 05:57:50 compute-0 ceph-mon[75840]: pgmap v1131: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:57:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:51 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "format": "json"}]: dispatch
Nov 22 05:57:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:57:52 compute-0 ceph-mon[75840]: pgmap v1132: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 26 KiB/s wr, 2 op/s
Nov 22 05:57:52 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 05:57:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "format": "json"}]: dispatch
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000356807461581038 of space, bias 4.0, pg target 0.4281689538972456 quantized to 16 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:57:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:57:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 22 05:57:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 22 05:57:53 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 22 05:57:54 compute-0 ceph-mon[75840]: pgmap v1133: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:54 compute-0 ceph-mon[75840]: osdmap e150: 3 total, 3 up, 3 in
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:54 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:55 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 05:57:55 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:57:55 compute-0 ceph-mon[75840]: pgmap v1135: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "target_sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, target_sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 0fea4068-9bc1-4fcb-8da7-4bb427ba3d62 for path b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9'
Nov 22 05:57:56 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "target_sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, target_sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.334+0000 7f533db69640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9)
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:57:56.390+0000 7f533e36a640 -1 client.0 error registering admin socket command: (17) File exists
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9) -- by 0 seconds
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 05:57:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:57 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:57:57 compute-0 ceph-mon[75840]: pgmap v1136: 321 pgs: 321 active+clean; 63 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:57:57 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.mscchl(active, since 33m)
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.snap/396d061d-06cf-48da-a32e-5cf66e8782c8/e6393510-65ec-437b-80ea-ea4e82cad1d5' to b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/2d2bb832-daca-4d58-88f8-adb59d3125a8'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 0fea4068-9bc1-4fcb-8da7-4bb427ba3d62
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta.tmp' to config b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9/.meta'
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 3d00e64c-c6bd-4014-9d75-6c2c64f0dda9)
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "format": "json"}]: dispatch
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:57:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:57:58 compute-0 ceph-mon[75840]: mgrmap e14: compute-0.mscchl(active, since 33m)
Nov 22 05:57:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "format": "json"}]: dispatch
Nov 22 05:57:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 6 op/s
Nov 22 05:57:59 compute-0 podman[270742]: 2025-11-22 05:57:59.265774198 +0000 UTC m=+0.127952930 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:57:59 compute-0 ceph-mon[75840]: pgmap v1137: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 6 op/s
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta.tmp'
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta.tmp' to config b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca/.meta'
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:57:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:57:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:57:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:00 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 7 op/s
Nov 22 05:58:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 05:58:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:01 compute-0 ceph-mon[75840]: pgmap v1138: 321 pgs: 321 active+clean; 64 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 7 op/s
Nov 22 05:58:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:58:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:58:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.499591) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082499615, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1510, "num_deletes": 252, "total_data_size": 1928815, "memory_usage": 1957088, "flush_reason": "Manual Compaction"}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082510011, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1907240, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24592, "largest_seqno": 26101, "table_properties": {"data_size": 1900094, "index_size": 3964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17388, "raw_average_key_size": 21, "raw_value_size": 1884814, "raw_average_value_size": 2281, "num_data_blocks": 177, "num_entries": 826, "num_filter_entries": 826, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763790979, "oldest_key_time": 1763790979, "file_creation_time": 1763791082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 10458 microseconds, and 4663 cpu microseconds.
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.510047) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1907240 bytes OK
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.510062) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511614) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511624) EVENT_LOG_v1 {"time_micros": 1763791082511621, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.511639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1921689, prev total WAL file size 1921689, number of live WAL files 2.
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.512291) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1862KB)], [56(9343KB)]
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082512357, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 11474727, "oldest_snapshot_seqno": -1}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5523 keys, 9760514 bytes, temperature: kUnknown
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082559753, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 9760514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9720243, "index_size": 25377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 137488, "raw_average_key_size": 24, "raw_value_size": 9617867, "raw_average_value_size": 1741, "num_data_blocks": 1056, "num_entries": 5523, "num_filter_entries": 5523, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791082, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.560097) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9760514 bytes
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.561807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.5 rd, 205.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.1 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.1) write-amplify(5.1) OK, records in: 6050, records dropped: 527 output_compression: NoCompression
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.561845) EVENT_LOG_v1 {"time_micros": 1763791082561828, "job": 30, "event": "compaction_finished", "compaction_time_micros": 47511, "compaction_time_cpu_micros": 22242, "output_level": 6, "num_output_files": 1, "total_output_size": 9760514, "num_input_records": 6050, "num_output_records": 5523, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082562768, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791082566307, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.512180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-05:58:02.566415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 05:58:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 64 KiB/s wr, 8 op/s
Nov 22 05:58:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:03 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:03 compute-0 ceph-mon[75840]: pgmap v1139: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 64 KiB/s wr, 8 op/s
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "format": "json"}]: dispatch
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72cb8e63-63dd-4239-8be6-4c1b98b626ca' of type subvolume
Nov 22 05:58:03 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:03.679+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '72cb8e63-63dd-4239-8be6-4c1b98b626ca' of type subvolume
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/72cb8e63-63dd-4239-8be6-4c1b98b626ca'' moved to trashcan
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:72cb8e63-63dd-4239-8be6-4c1b98b626ca, vol_name:cephfs) < ""
Nov 22 05:58:04 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "format": "json"}]: dispatch
Nov 22 05:58:04 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "format": "json"}]: dispatch
Nov 22 05:58:04 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "72cb8e63-63dd-4239-8be6-4c1b98b626ca", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 533 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:58:05 compute-0 ceph-mon[75840]: pgmap v1140: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 533 B/s rd, 55 KiB/s wr, 7 op/s
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d00e64c-c6bd-4014-9d75-6c2c64f0dda9'' moved to trashcan
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d00e64c-c6bd-4014-9d75-6c2c64f0dda9, vol_name:cephfs) < ""
Nov 22 05:58:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 53 KiB/s wr, 7 op/s
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta.tmp'
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta.tmp' to config b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319/.meta'
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:07 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "format": "json"}]: dispatch
Nov 22 05:58:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d00e64c-c6bd-4014-9d75-6c2c64f0dda9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:07 compute-0 ceph-mon[75840]: pgmap v1141: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 53 KiB/s wr, 7 op/s
Nov 22 05:58:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp'
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta.tmp' to config b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63/.meta'
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2ac4478c-6307-4031-b8c6-c0bc836a8aab, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:58:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:08 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 05:58:09 compute-0 podman[270771]: 2025-11-22 05:58:09.197898822 +0000 UTC m=+0.053195567 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:58:09 compute-0 podman[270770]: 2025-11-22 05:58:09.220835327 +0000 UTC m=+0.081488394 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp'
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta.tmp' to config b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a/.meta'
Nov 22 05:58:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:396d061d-06cf-48da-a32e-5cf66e8782c8, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab_ff3e7232-40bd-4efc-8a3f-80318631d2e5", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "snap_name": "2ac4478c-6307-4031-b8c6-c0bc836a8aab", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:09 compute-0 ceph-mon[75840]: pgmap v1142: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 83 KiB/s wr, 9 op/s
Nov 22 05:58:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 49 KiB/s wr, 5 op/s
Nov 22 05:58:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8_6932ae0a-642f-4769-a012-4989a1eed830", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "snap_name": "396d061d-06cf-48da-a32e-5cf66e8782c8", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f93c12a6-84cd-4937-a909-48f837e88319, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f93c12a6-84cd-4937-a909-48f837e88319, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f93c12a6-84cd-4937-a909-48f837e88319' of type subvolume
Nov 22 05:58:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:11.348+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f93c12a6-84cd-4937-a909-48f837e88319' of type subvolume
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f93c12a6-84cd-4937-a909-48f837e88319'' moved to trashcan
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f93c12a6-84cd-4937-a909-48f837e88319, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63' of type subvolume
Nov 22 05:58:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:11.754+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63' of type subvolume
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63'' moved to trashcan
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63, vol_name:cephfs) < ""
Nov 22 05:58:11 compute-0 ceph-mon[75840]: pgmap v1143: 321 pgs: 321 active+clean; 64 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 49 KiB/s wr, 5 op/s
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 86 KiB/s wr, 9 op/s
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3dcff5a7-454d-46f7-9ff8-546a79d1c07a' of type subvolume
Nov 22 05:58:12 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:12.903+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3dcff5a7-454d-46f7-9ff8-546a79d1c07a' of type subvolume
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3dcff5a7-454d-46f7-9ff8-546a79d1c07a'' moved to trashcan
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3dcff5a7-454d-46f7-9ff8-546a79d1c07a, vol_name:cephfs) < ""
Nov 22 05:58:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 22 05:58:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 22 05:58:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f93c12a6-84cd-4937-a909-48f837e88319", "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f93c12a6-84cd-4937-a909-48f837e88319", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9ed15f-ab31-45b4-b3cb-9a2d46be0d63", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:12 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 22 05:58:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:13 compute-0 ceph-mon[75840]: pgmap v1144: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 86 KiB/s wr, 9 op/s
Nov 22 05:58:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "format": "json"}]: dispatch
Nov 22 05:58:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3dcff5a7-454d-46f7-9ff8-546a79d1c07a", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:13 compute-0 ceph-mon[75840]: osdmap e151: 3 total, 3 up, 3 in
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta.tmp'
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta.tmp' to config b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001/.meta'
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.176 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.177 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.178 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.178 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:58:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:58:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1261161903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.610 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.781 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.783 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5062MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.783 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.784 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.856 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.856 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:58:15 compute-0 nova_compute[255660]: 2025-11-22 05:58:15.880 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:58:15 compute-0 ceph-mon[75840]: pgmap v1146: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 05:58:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1261161903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:58:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:58:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458841063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:58:16 compute-0 nova_compute[255660]: 2025-11-22 05:58:16.330 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:58:16 compute-0 nova_compute[255660]: 2025-11-22 05:58:16.337 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:58:16 compute-0 nova_compute[255660]: 2025-11-22 05:58:16.355 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:58:16 compute-0 nova_compute[255660]: 2025-11-22 05:58:16.357 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:58:16 compute-0 nova_compute[255660]: 2025-11-22 05:58:16.358 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:58:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2458841063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:58:18 compute-0 ceph-mon[75840]: pgmap v1147: 321 pgs: 321 active+clean; 65 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a58dddbc-e4f6-44cf-84c7-f24633017001, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a58dddbc-e4f6-44cf-84c7-f24633017001, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a58dddbc-e4f6-44cf-84c7-f24633017001' of type subvolume
Nov 22 05:58:18 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:18.740+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a58dddbc-e4f6-44cf-84c7-f24633017001' of type subvolume
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a58dddbc-e4f6-44cf-84c7-f24633017001'' moved to trashcan
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a58dddbc-e4f6-44cf-84c7-f24633017001, vol_name:cephfs) < ""
Nov 22 05:58:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 8 op/s
Nov 22 05:58:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "format": "json"}]: dispatch
Nov 22 05:58:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a58dddbc-e4f6-44cf-84c7-f24633017001", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:20 compute-0 ceph-mon[75840]: pgmap v1148: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 8 op/s
Nov 22 05:58:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 7 op/s
Nov 22 05:58:21 compute-0 nova_compute[255660]: 2025-11-22 05:58:21.359 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:22 compute-0 ceph-mon[75840]: pgmap v1149: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 75 KiB/s wr, 7 op/s
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta.tmp'
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta.tmp' to config b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde/.meta'
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:22 compute-0 sudo[270852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:22 compute-0 sudo[270852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:22 compute-0 sudo[270852]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:22 compute-0 sudo[270877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:58:22 compute-0 sudo[270877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:22 compute-0 sudo[270877]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:22 compute-0 sudo[270902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:22 compute-0 sudo[270902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:22 compute-0 sudo[270902]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:22 compute-0 sudo[270927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 05:58:22 compute-0 sudo[270927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 05:58:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:23 compute-0 nova_compute[255660]: 2025-11-22 05:58:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:23 compute-0 nova_compute[255660]: 2025-11-22 05:58:23.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:58:23 compute-0 podman[271027]: 2025-11-22 05:58:23.243155506 +0000 UTC m=+0.095601923 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:58:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 22 05:58:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 22 05:58:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 22 05:58:23 compute-0 podman[271027]: 2025-11-22 05:58:23.362502984 +0000 UTC m=+0.214949341 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:58:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:24 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 05:58:24 compute-0 ceph-mon[75840]: pgmap v1150: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 05:58:24 compute-0 ceph-mon[75840]: osdmap e152: 3 total, 3 up, 3 in
Nov 22 05:58:24 compute-0 nova_compute[255660]: 2025-11-22 05:58:24.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:24 compute-0 sudo[270927]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:58:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:58:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:24 compute-0 sudo[271188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:24 compute-0 sudo[271188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:24 compute-0 sudo[271188]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:24 compute-0 sudo[271213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:58:24 compute-0 sudo[271213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:24 compute-0 sudo[271213]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:24 compute-0 sudo[271238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:24 compute-0 sudo[271238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:24 compute-0 sudo[271238]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:24 compute-0 sudo[271263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:58:24 compute-0 sudo[271263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 05:58:25 compute-0 nova_compute[255660]: 2025-11-22 05:58:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:25 compute-0 sudo[271263]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:25 compute-0 ceph-mon[75840]: pgmap v1152: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 55 KiB/s wr, 5 op/s
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 07576dd4-5ba9-46c4-a21d-f1cca505943b does not exist
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 6d05f55d-355d-4418-ad8d-d66c7db25974 does not exist
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 960a49cd-547d-471a-9c39-183a460a66ae does not exist
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:58:25 compute-0 sudo[271317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:25 compute-0 sudo[271317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:25 compute-0 sudo[271317]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:25 compute-0 sudo[271342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:58:25 compute-0 sudo[271342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:25 compute-0 sudo[271342]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:25 compute-0 sudo[271367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:25 compute-0 sudo[271367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:25 compute-0 sudo[271367]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:25 compute-0 sudo[271392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:58:25 compute-0 sudo[271392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:25 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:25 compute-0 podman[271458]: 2025-11-22 05:58:25.996863335 +0000 UTC m=+0.065138396 container create 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 05:58:26 compute-0 systemd[1]: Started libpod-conmon-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope.
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:25.969829981 +0000 UTC m=+0.038105092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:26.112764171 +0000 UTC m=+0.181039282 container init 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:26.124295711 +0000 UTC m=+0.192570772 container start 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:26.128624847 +0000 UTC m=+0.196899918 container attach 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:58:26 compute-0 nova_compute[255660]: 2025-11-22 05:58:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:26 compute-0 nova_compute[255660]: 2025-11-22 05:58:26.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:26 compute-0 sad_jang[271474]: 167 167
Nov 22 05:58:26 compute-0 systemd[1]: libpod-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope: Deactivated successfully.
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:26.133306292 +0000 UTC m=+0.201581343 container died 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 05:58:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b32006c52d0c31ef2f75ce6e82b2b8ffbe5a4cee1ba3fdc1ceb3d42d5e208f-merged.mount: Deactivated successfully.
Nov 22 05:58:26 compute-0 podman[271458]: 2025-11-22 05:58:26.196413383 +0000 UTC m=+0.264688414 container remove 13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:58:26 compute-0 systemd[1]: libpod-conmon-13d6f4a019c384691b4cabbb58d7fa22a6de6bdd349c7b3445680cf57d590d33.scope: Deactivated successfully.
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:35291e42-2480-4994-b801-7fa345608cde, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:35291e42-2480-4994-b801-7fa345608cde, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35291e42-2480-4994-b801-7fa345608cde' of type subvolume
Nov 22 05:58:26 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:26.427+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '35291e42-2480-4994-b801-7fa345608cde' of type subvolume
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/35291e42-2480-4994-b801-7fa345608cde'' moved to trashcan
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:35291e42-2480-4994-b801-7fa345608cde, vol_name:cephfs) < ""
Nov 22 05:58:26 compute-0 podman[271498]: 2025-11-22 05:58:26.461574068 +0000 UTC m=+0.069929295 container create b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 05:58:26 compute-0 systemd[1]: Started libpod-conmon-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope.
Nov 22 05:58:26 compute-0 podman[271498]: 2025-11-22 05:58:26.430220078 +0000 UTC m=+0.038575335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:26 compute-0 podman[271498]: 2025-11-22 05:58:26.564061735 +0000 UTC m=+0.172417052 container init b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:58:26 compute-0 podman[271498]: 2025-11-22 05:58:26.581421179 +0000 UTC m=+0.189776426 container start b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 05:58:26 compute-0 podman[271498]: 2025-11-22 05:58:26.587074111 +0000 UTC m=+0.195429368 container attach b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:58:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 54 KiB/s wr, 4 op/s
Nov 22 05:58:27 compute-0 nova_compute[255660]: 2025-11-22 05:58:27.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "35291e42-2480-4994-b801-7fa345608cde", "format": "json"}]: dispatch
Nov 22 05:58:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "35291e42-2480-4994-b801-7fa345608cde", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:27 compute-0 ceph-mon[75840]: pgmap v1153: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 54 KiB/s wr, 4 op/s
Nov 22 05:58:27 compute-0 dreamy_pasteur[271514]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:58:27 compute-0 dreamy_pasteur[271514]: --> relative data size: 1.0
Nov 22 05:58:27 compute-0 dreamy_pasteur[271514]: --> All data devices are unavailable
Nov 22 05:58:27 compute-0 systemd[1]: libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Deactivated successfully.
Nov 22 05:58:27 compute-0 systemd[1]: libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Consumed 1.122s CPU time.
Nov 22 05:58:27 compute-0 conmon[271514]: conmon b91f3682e8518d49b133 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope/container/memory.events
Nov 22 05:58:27 compute-0 podman[271543]: 2025-11-22 05:58:27.807074182 +0000 UTC m=+0.033033905 container died b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b8d01a3dbcac408033ecfc596a3cf18b69f9aedd08b7257a2ec1b59a44d5a57-merged.mount: Deactivated successfully.
Nov 22 05:58:27 compute-0 podman[271543]: 2025-11-22 05:58:27.879719519 +0000 UTC m=+0.105679162 container remove b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 05:58:27 compute-0 systemd[1]: libpod-conmon-b91f3682e8518d49b133555b1acbafc626e2c0e2fc775a05bac19342407f9895.scope: Deactivated successfully.
Nov 22 05:58:27 compute-0 sudo[271392]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:28 compute-0 sudo[271558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:28 compute-0 sudo[271558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:28 compute-0 sudo[271558]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:28 compute-0 sudo[271583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:58:28 compute-0 sudo[271583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:28 compute-0 sudo[271583]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:28 compute-0 nova_compute[255660]: 2025-11-22 05:58:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:58:28 compute-0 nova_compute[255660]: 2025-11-22 05:58:28.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:58:28 compute-0 nova_compute[255660]: 2025-11-22 05:58:28.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:58:28 compute-0 nova_compute[255660]: 2025-11-22 05:58:28.157 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:58:28 compute-0 sudo[271608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:28 compute-0 sudo[271608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:28 compute-0 sudo[271608]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:28 compute-0 sudo[271633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:58:28 compute-0 sudo[271633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:28 compute-0 podman[271699]: 2025-11-22 05:58:28.709650468 +0000 UTC m=+0.039084158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:28 compute-0 podman[271699]: 2025-11-22 05:58:28.828562625 +0000 UTC m=+0.157996325 container create b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 05:58:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 05:58:29 compute-0 systemd[1]: Started libpod-conmon-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope.
Nov 22 05:58:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:29 compute-0 podman[271699]: 2025-11-22 05:58:29.180964908 +0000 UTC m=+0.510398658 container init b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 05:58:29 compute-0 podman[271699]: 2025-11-22 05:58:29.193112793 +0000 UTC m=+0.522546493 container start b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:58:29 compute-0 clever_lehmann[271715]: 167 167
Nov 22 05:58:29 compute-0 systemd[1]: libpod-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope: Deactivated successfully.
Nov 22 05:58:29 compute-0 podman[271699]: 2025-11-22 05:58:29.232624462 +0000 UTC m=+0.562058172 container attach b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 05:58:29 compute-0 podman[271699]: 2025-11-22 05:58:29.233098835 +0000 UTC m=+0.562532545 container died b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-808e485f8e135435b3d7822bc887ba800eb57ed95bf45e1e16136376f4a36f6c-merged.mount: Deactivated successfully.
Nov 22 05:58:29 compute-0 podman[271699]: 2025-11-22 05:58:29.57324668 +0000 UTC m=+0.902680380 container remove b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:58:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "format": "json"}]: dispatch
Nov 22 05:58:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:29 compute-0 systemd[1]: libpod-conmon-b085f41456b8d9aafacc5e65460585c7efff4e730d8bfc012283889201e4734d.scope: Deactivated successfully.
Nov 22 05:58:29 compute-0 podman[271732]: 2025-11-22 05:58:29.701368652 +0000 UTC m=+0.359462542 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 05:58:29 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:29 compute-0 podman[271766]: 2025-11-22 05:58:29.820157086 +0000 UTC m=+0.048403698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:29 compute-0 podman[271766]: 2025-11-22 05:58:29.9206866 +0000 UTC m=+0.148933152 container create d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:58:30 compute-0 ceph-mon[75840]: pgmap v1154: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 05:58:30 compute-0 systemd[1]: Started libpod-conmon-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope.
Nov 22 05:58:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:30 compute-0 podman[271766]: 2025-11-22 05:58:30.139201875 +0000 UTC m=+0.367448467 container init d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:58:30 compute-0 podman[271766]: 2025-11-22 05:58:30.151625458 +0000 UTC m=+0.379872010 container start d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:30 compute-0 podman[271766]: 2025-11-22 05:58:30.329387372 +0000 UTC m=+0.557633924 container attach d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta.tmp'
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta.tmp' to config b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326/.meta'
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 05:58:30 compute-0 hungry_nash[271783]: {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     "0": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "devices": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "/dev/loop3"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             ],
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_name": "ceph_lv0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_size": "21470642176",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "name": "ceph_lv0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "tags": {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_name": "ceph",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.crush_device_class": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.encrypted": "0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_id": "0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.vdo": "0"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             },
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "vg_name": "ceph_vg0"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         }
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     ],
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     "1": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "devices": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "/dev/loop4"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             ],
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_name": "ceph_lv1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_size": "21470642176",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "name": "ceph_lv1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "tags": {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_name": "ceph",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.crush_device_class": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.encrypted": "0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_id": "1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.vdo": "0"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             },
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "vg_name": "ceph_vg1"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         }
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     ],
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     "2": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "devices": [
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "/dev/loop5"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             ],
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_name": "ceph_lv2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_size": "21470642176",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "name": "ceph_lv2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "tags": {
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.cluster_name": "ceph",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.crush_device_class": "",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.encrypted": "0",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osd_id": "2",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:                 "ceph.vdo": "0"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             },
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "type": "block",
Nov 22 05:58:30 compute-0 hungry_nash[271783]:             "vg_name": "ceph_vg2"
Nov 22 05:58:30 compute-0 hungry_nash[271783]:         }
Nov 22 05:58:30 compute-0 hungry_nash[271783]:     ]
Nov 22 05:58:30 compute-0 hungry_nash[271783]: }
Nov 22 05:58:30 compute-0 systemd[1]: libpod-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope: Deactivated successfully.
Nov 22 05:58:30 compute-0 podman[271766]: 2025-11-22 05:58:30.92382073 +0000 UTC m=+1.152067252 container died d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:58:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-48974d0968148d86a7e95183947e789d8079c0367ea73e5a004562610b1ea54e-merged.mount: Deactivated successfully.
Nov 22 05:58:30 compute-0 podman[271766]: 2025-11-22 05:58:30.997628838 +0000 UTC m=+1.225875360 container remove d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 05:58:31 compute-0 systemd[1]: libpod-conmon-d9d044965504667f6a82da6a82d01039a83afaefce7dedc7c77458d4cde78cd3.scope: Deactivated successfully.
Nov 22 05:58:31 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "format": "json"}]: dispatch
Nov 22 05:58:31 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:31 compute-0 sudo[271633]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:31 compute-0 sudo[271806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:31 compute-0 sudo[271806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:31 compute-0 sudo[271806]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:31 compute-0 sudo[271831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:58:31 compute-0 sudo[271831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:31 compute-0 sudo[271831]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:31 compute-0 sudo[271856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:31 compute-0 sudo[271856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:31 compute-0 sudo[271856]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:31 compute-0 sudo[271881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:58:31 compute-0 sudo[271881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.796590867 +0000 UTC m=+0.058654232 container create e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:58:31 compute-0 systemd[1]: Started libpod-conmon-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope.
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.767673532 +0000 UTC m=+0.029736957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.885056648 +0000 UTC m=+0.147120073 container init e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.892022965 +0000 UTC m=+0.154086300 container start e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:58:31 compute-0 wonderful_mendeleev[271966]: 167 167
Nov 22 05:58:31 compute-0 systemd[1]: libpod-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope: Deactivated successfully.
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.896208597 +0000 UTC m=+0.158271982 container attach e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.896937486 +0000 UTC m=+0.159000841 container died e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 05:58:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-52f7385ab9750b2f12f27bfedf778df673f28c7b967a28ce045e9efe460a239b-merged.mount: Deactivated successfully.
Nov 22 05:58:31 compute-0 podman[271949]: 2025-11-22 05:58:31.932611532 +0000 UTC m=+0.194674887 container remove e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 05:58:31 compute-0 systemd[1]: libpod-conmon-e390c40734d7fda94780933b52af0729c501cf9fa600814a85b46d5211ea6117.scope: Deactivated successfully.
Nov 22 05:58:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 05:58:32 compute-0 ceph-mon[75840]: pgmap v1155: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 54 KiB/s wr, 3 op/s
Nov 22 05:58:32 compute-0 podman[271990]: 2025-11-22 05:58:32.12211216 +0000 UTC m=+0.048365167 container create 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 05:58:32 compute-0 systemd[1]: Started libpod-conmon-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope.
Nov 22 05:58:32 compute-0 podman[271990]: 2025-11-22 05:58:32.103873551 +0000 UTC m=+0.030126578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:58:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:58:32 compute-0 podman[271990]: 2025-11-22 05:58:32.22957271 +0000 UTC m=+0.155825747 container init 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 05:58:32 compute-0 podman[271990]: 2025-11-22 05:58:32.245851636 +0000 UTC m=+0.172104673 container start 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:58:32 compute-0 podman[271990]: 2025-11-22 05:58:32.250299565 +0000 UTC m=+0.176552572 container attach 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:58:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 3 op/s
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]: {
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_id": 1,
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "type": "bluestore"
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     },
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_id": 2,
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "type": "bluestore"
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     },
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_id": 0,
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:         "type": "bluestore"
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]:     }
Nov 22 05:58:33 compute-0 affectionate_lumiere[272006]: }
Nov 22 05:58:33 compute-0 systemd[1]: libpod-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Deactivated successfully.
Nov 22 05:58:33 compute-0 systemd[1]: libpod-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Consumed 1.098s CPU time.
Nov 22 05:58:33 compute-0 podman[271990]: 2025-11-22 05:58:33.331038225 +0000 UTC m=+1.257291232 container died 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:58:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad909352cedbb9b465e956bd6260782ac3d5bdc038816b2470f3cf7485ac2cef-merged.mount: Deactivated successfully.
Nov 22 05:58:33 compute-0 podman[271990]: 2025-11-22 05:58:33.407990527 +0000 UTC m=+1.334243544 container remove 09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:58:33 compute-0 systemd[1]: libpod-conmon-09f4a9e151d049af4e3f76519fdeefd6e79f74fe1b0f7051947bfec750e3d1f1.scope: Deactivated successfully.
Nov 22 05:58:33 compute-0 sudo[271881]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:58:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:58:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 730cb392-55db-45a0-83f0-eef602fe1fc3 does not exist
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 481fdc55-019a-453f-8d5b-7ce03c6b76af does not exist
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp'
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta.tmp' to config b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9/.meta'
Nov 22 05:58:33 compute-0 sudo[272051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:58:33 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aef441a3-a76b-4305-8686-8c0b89f2f1b9, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:33 compute-0 sudo[272051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:33 compute-0 sudo[272051]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:33 compute-0 sudo[272076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:58:33 compute-0 sudo[272076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:58:33 compute-0 sudo[272076]: pam_unix(sudo:session): session closed for user root
Nov 22 05:58:34 compute-0 ceph-mon[75840]: pgmap v1156: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 3 op/s
Nov 22 05:58:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074d9098-d04c-45ea-9d9a-2dcbe0a4b326' of type subvolume
Nov 22 05:58:34 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:34.405+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '074d9098-d04c-45ea-9d9a-2dcbe0a4b326' of type subvolume
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/074d9098-d04c-45ea-9d9a-2dcbe0a4b326'' moved to trashcan
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:074d9098-d04c-45ea-9d9a-2dcbe0a4b326, vol_name:cephfs) < ""
Nov 22 05:58:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 41 KiB/s wr, 3 op/s
Nov 22 05:58:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9_4b7ccb8c-a586-453d-ac99-e365a37bb6c2", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "snap_name": "aef441a3-a76b-4305-8686-8c0b89f2f1b9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "format": "json"}]: dispatch
Nov 22 05:58:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "074d9098-d04c-45ea-9d9a-2dcbe0a4b326", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:35 compute-0 ceph-mon[75840]: pgmap v1157: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 177 B/s rd, 41 KiB/s wr, 3 op/s
Nov 22 05:58:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.779 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:58:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.780 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:58:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:58:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.938 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:58:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:58:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:04ec723f-2266-44ad-8738-9d300104eaa9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:04ec723f-2266-44ad-8738-9d300104eaa9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:37 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:37.090+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04ec723f-2266-44ad-8738-9d300104eaa9' of type subvolume
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '04ec723f-2266-44ad-8738-9d300104eaa9' of type subvolume
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/04ec723f-2266-44ad-8738-9d300104eaa9'' moved to trashcan
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:04ec723f-2266-44ad-8738-9d300104eaa9, vol_name:cephfs) < ""
Nov 22 05:58:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 22 05:58:37 compute-0 ceph-mon[75840]: pgmap v1158: 321 pgs: 321 active+clean; 65 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 05:58:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 22 05:58:37 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 22 05:58:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp'
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta.tmp' to config b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d/.meta'
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d70e8843-02c1-482f-aebd-63710671186e, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 05:58:38 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "format": "json"}]: dispatch
Nov 22 05:58:38 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04ec723f-2266-44ad-8738-9d300104eaa9", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:38 compute-0 ceph-mon[75840]: osdmap e153: 3 total, 3 up, 3 in
Nov 22 05:58:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e_d41398b5-1031-4e8d-933e-c6c94e22ca32", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:39 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "snap_name": "d70e8843-02c1-482f-aebd-63710671186e", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:39 compute-0 ceph-mon[75840]: pgmap v1160: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 05:58:40 compute-0 podman[272102]: 2025-11-22 05:58:40.210519709 +0000 UTC m=+0.065315001 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 05:58:40 compute-0 podman[272103]: 2025-11-22 05:58:40.212930354 +0000 UTC m=+0.065697911 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:40 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:40 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 05:58:40 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:41 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:58:41.783 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:58:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 05:58:42 compute-0 ceph-mon[75840]: pgmap v1161: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 58 KiB/s wr, 5 op/s
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6262f914-71c2-4411-a49e-54f30a05659d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6262f914-71c2-4411-a49e-54f30a05659d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6262f914-71c2-4411-a49e-54f30a05659d' of type subvolume
Nov 22 05:58:42 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:42.334+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6262f914-71c2-4411-a49e-54f30a05659d' of type subvolume
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6262f914-71c2-4411-a49e-54f30a05659d'' moved to trashcan
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6262f914-71c2-4411-a49e-54f30a05659d, vol_name:cephfs) < ""
Nov 22 05:58:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 6 op/s
Nov 22 05:58:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 22 05:58:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 22 05:58:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 22 05:58:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 22 05:58:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 22 05:58:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:58:43
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:58:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:58:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6262f914-71c2-4411-a49e-54f30a05659d", "format": "json"}]: dispatch
Nov 22 05:58:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6262f914-71c2-4411-a49e-54f30a05659d", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:44 compute-0 ceph-mon[75840]: pgmap v1162: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 80 KiB/s wr, 6 op/s
Nov 22 05:58:44 compute-0 ceph-mon[75840]: osdmap e154: 3 total, 3 up, 3 in
Nov 22 05:58:44 compute-0 ceph-mon[75840]: osdmap e155: 3 total, 3 up, 3 in
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "format": "json"}]: dispatch
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 57 KiB/s wr, 4 op/s
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:45 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "format": "json"}]: dispatch
Nov 22 05:58:45 compute-0 ceph-mon[75840]: pgmap v1165: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 590 B/s rd, 57 KiB/s wr, 4 op/s
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:45 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:58:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:58:46 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 05:58:46 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:58:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 05:58:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:58:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:58:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:58:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:58:47 compute-0 ceph-mon[75840]: pgmap v1166: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 05:58:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:58:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1148052480' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:58:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "format": "json"}]: dispatch
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp'
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta.tmp' to config b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8/.meta'
Nov 22 05:58:48 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b64ec859-00ea-4356-8f9d-6f1d033496e0, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "target_sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, target_sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 1a33974a-a90d-4c47-97c4-c31d41cbeceb for path b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, target_sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137)
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:58:49 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "format": "json"}]: dispatch
Nov 22 05:58:49 compute-0 ceph-mon[75840]: pgmap v1167: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 80 KiB/s wr, 7 op/s
Nov 22 05:58:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0_45cbbe81-54f3-4d33-b6ed-0541e70b79ac", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:49 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "snap_name": "b64ec859-00ea-4356-8f9d-6f1d033496e0", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 3 op/s
Nov 22 05:58:50 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "target_sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:58:50 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:58:51 compute-0 ceph-mon[75840]: pgmap v1168: 321 pgs: 321 active+clean; 66 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 31 KiB/s wr, 3 op/s
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137) -- by 0 seconds
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:12681db0-dac5-4be1-a94e-db0502d683a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 415 B/s rd, 60 KiB/s wr, 6 op/s
Nov 22 05:58:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 22 05:58:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 22 05:58:52 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000414865777821189 of space, bias 4.0, pg target 0.49783893338542684 quantized to 16 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:58:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:58:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 22 05:58:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 22 05:58:53 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 22 05:58:53 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "format": "json"}]: dispatch
Nov 22 05:58:53 compute-0 ceph-mon[75840]: pgmap v1169: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 415 B/s rd, 60 KiB/s wr, 6 op/s
Nov 22 05:58:53 compute-0 ceph-mon[75840]: osdmap e156: 3 total, 3 up, 3 in
Nov 22 05:58:53 compute-0 ceph-mon[75840]: osdmap e157: 3 total, 3 up, 3 in
Nov 22 05:58:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 7 op/s
Nov 22 05:58:55 compute-0 ceph-mon[75840]: pgmap v1172: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 74 KiB/s wr, 7 op/s
Nov 22 05:58:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 43 KiB/s wr, 4 op/s
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:12681db0-dac5-4be1-a94e-db0502d683a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12681db0-dac5-4be1-a94e-db0502d683a8' of type subvolume
Nov 22 05:58:57 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:58:57.428+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '12681db0-dac5-4be1-a94e-db0502d683a8' of type subvolume
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "force": true, "format": "json"}]: dispatch
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.snap/7cb6a540-aa78-41e5-b112-51878416b681/2856a001-7e16-4367-8d2d-8c670740b800' to b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/b0c60bd9-e586-4f2c-ae51-cb9345b16ccf'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/12681db0-dac5-4be1-a94e-db0502d683a8'' moved to trashcan
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:12681db0-dac5-4be1-a94e-db0502d683a8, vol_name:cephfs) < ""
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.clone_index] untracking 1a33974a-a90d-4c47-97c4-c31d41cbeceb
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta.tmp' to config b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137/.meta'
Nov 22 05:58:57 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 6dd259a0-2767-493c-a1d5-a32b18495137)
Nov 22 05:58:58 compute-0 ceph-mon[75840]: pgmap v1173: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 43 KiB/s wr, 4 op/s
Nov 22 05:58:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:58:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:58:59 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "12681db0-dac5-4be1-a94e-db0502d683a8", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:00 compute-0 ceph-mon[75840]: pgmap v1174: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:59:00 compute-0 podman[272141]: 2025-11-22 05:59:00.291645898 +0000 UTC m=+0.143851835 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 05:59:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:00 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 49 KiB/s wr, 3 op/s
Nov 22 05:59:01 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:01 compute-0 ceph-mon[75840]: pgmap v1175: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 49 KiB/s wr, 3 op/s
Nov 22 05:59:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:02 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:59:02 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:59:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 723 B/s rd, 47 KiB/s wr, 4 op/s
Nov 22 05:59:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 22 05:59:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 22 05:59:03 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 22 05:59:03 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:03 compute-0 ceph-mon[75840]: pgmap v1176: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 723 B/s rd, 47 KiB/s wr, 4 op/s
Nov 22 05:59:03 compute-0 ceph-mon[75840]: osdmap e158: 3 total, 3 up, 3 in
Nov 22 05:59:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta.tmp'
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta.tmp' to config b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8/.meta'
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:05 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:05 compute-0 ceph-mon[75840]: pgmap v1178: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 05:59:05 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 05:59:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 05:59:06 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:07 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 05:59:07 compute-0 ceph-mon[75840]: pgmap v1179: 321 pgs: 321 active+clean; 67 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 46 KiB/s wr, 4 op/s
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 05:59:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:59:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:09 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:09 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 05:59:09 compute-0 ceph-mon[75840]: pgmap v1180: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 05:59:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "format": "json"}]: dispatch
Nov 22 05:59:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:10 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 05:59:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 22 05:59:11 compute-0 podman[272167]: 2025-11-22 05:59:11.232985148 +0000 UTC m=+0.072840983 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 05:59:11 compute-0 podman[272168]: 2025-11-22 05:59:11.240302874 +0000 UTC m=+0.077913650 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 05:59:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "format": "json"}]: dispatch
Nov 22 05:59:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:11 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:12 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "format": "json"}]: dispatch
Nov 22 05:59:12 compute-0 ceph-mon[75840]: pgmap v1181: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 44 KiB/s wr, 4 op/s
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea42c00d-c230-4795-b72e-34001c4be0a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea42c00d-c230-4795-b72e-34001c4be0a8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:12 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:12.653+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea42c00d-c230-4795-b72e-34001c4be0a8' of type subvolume
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea42c00d-c230-4795-b72e-34001c4be0a8' of type subvolume
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea42c00d-c230-4795-b72e-34001c4be0a8'' moved to trashcan
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea42c00d-c230-4795-b72e-34001c4be0a8, vol_name:cephfs) < ""
Nov 22 05:59:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 4 op/s
Nov 22 05:59:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "format": "json"}]: dispatch
Nov 22 05:59:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "format": "json"}]: dispatch
Nov 22 05:59:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea42c00d-c230-4795-b72e-34001c4be0a8", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:14 compute-0 ceph-mon[75840]: pgmap v1182: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s wr, 4 op/s
Nov 22 05:59:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "format": "json"}]: dispatch
Nov 22 05:59:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:14 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Nov 22 05:59:15 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "format": "json"}]: dispatch
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.165 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.166 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.167 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.167 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:59:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:59:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199808711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.704 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.890 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.892 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5079MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.893 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.946 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.947 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 05:59:15 compute-0 nova_compute[255660]: 2025-11-22 05:59:15.965 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 05:59:16 compute-0 ceph-mon[75840]: pgmap v1183: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 3 op/s
Nov 22 05:59:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2199808711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:59:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 05:59:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345386042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:59:16 compute-0 nova_compute[255660]: 2025-11-22 05:59:16.392 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 05:59:16 compute-0 nova_compute[255660]: 2025-11-22 05:59:16.397 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 05:59:16 compute-0 nova_compute[255660]: 2025-11-22 05:59:16.412 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 05:59:16 compute-0 nova_compute[255660]: 2025-11-22 05:59:16.413 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 05:59:16 compute-0 nova_compute[255660]: 2025-11-22 05:59:16.413 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp'
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta.tmp' to config b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc/.meta'
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e8c4aec1-9bed-494d-9cc6-4b106df55c19, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 3 op/s
Nov 22 05:59:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3345386042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 05:59:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19_85eccfbd-7b38-4356-b114-dfaefadf6ee5", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:18 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "snap_name": "e8c4aec1-9bed-494d-9cc6-4b106df55c19", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:18 compute-0 ceph-mon[75840]: pgmap v1184: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s wr, 3 op/s
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 83 KiB/s wr, 5 op/s
Nov 22 05:59:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d_84b80663-4f8e-4ed8-afc4-2b3b3f9e14c9", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:20 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "6a5d1464-36eb-4f65-a22f-d6e8dfb31c4d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:20 compute-0 ceph-mon[75840]: pgmap v1185: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 83 KiB/s wr, 5 op/s
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:20 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:20.100+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f58a5d10-062f-4cf1-87a0-845f4b3226dc' of type subvolume
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f58a5d10-062f-4cf1-87a0-845f4b3226dc' of type subvolume
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f58a5d10-062f-4cf1-87a0-845f4b3226dc'' moved to trashcan
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f58a5d10-062f-4cf1-87a0-845f4b3226dc, vol_name:cephfs) < ""
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 22 05:59:21 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f58a5d10-062f-4cf1-87a0-845f4b3226dc", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mon[75840]: pgmap v1186: 321 pgs: 321 active+clean; 68 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta.tmp'
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta.tmp' to config b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d/.meta'
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "format": "json"}]: dispatch
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:21 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 05:59:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:22 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "format": "json"}]: dispatch
Nov 22 05:59:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:59:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 22 05:59:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 22 05:59:23 compute-0 ceph-mon[75840]: pgmap v1187: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:59:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 22 05:59:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:23 compute-0 nova_compute[255660]: 2025-11-22 05:59:23.409 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:23 compute-0 nova_compute[255660]: 2025-11-22 05:59:23.441 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:24 compute-0 ceph-mon[75840]: osdmap e159: 3 total, 3 up, 3 in
Nov 22 05:59:24 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 22 05:59:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:24 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 05:59:25 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 22 05:59:25 compute-0 ceph-mon[75840]: pgmap v1189: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 05:59:25 compute-0 nova_compute[255660]: 2025-11-22 05:59:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:25 compute-0 nova_compute[255660]: 2025-11-22 05:59:25.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:25 compute-0 nova_compute[255660]: 2025-11-22 05:59:25.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 05:59:26 compute-0 nova_compute[255660]: 2025-11-22 05:59:26.126 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:26 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:667e8bfa-a29c-4ad9-967f-02f89f43b812, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:27 compute-0 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:27 compute-0 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:27 compute-0 nova_compute[255660]: 2025-11-22 05:59:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812_f3ed6427-14c4-4c3c-91df-3002a87409c7", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:27 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "667e8bfa-a29c-4ad9-967f-02f89f43b812", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:27 compute-0 ceph-mon[75840]: pgmap v1190: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 88 KiB/s wr, 7 op/s
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:28 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:28.105+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d' of type subvolume
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d' of type subvolume
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d'' moved to trashcan
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d, vol_name:cephfs) < ""
Nov 22 05:59:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 05:59:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "format": "json"}]: dispatch
Nov 22 05:59:30 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ae65e570-cdcd-47ba-b14c-0ccf6fa8b44d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:30 compute-0 ceph-mon[75840]: pgmap v1191: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 05:59:30 compute-0 nova_compute[255660]: 2025-11-22 05:59:30.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 05:59:30 compute-0 nova_compute[255660]: 2025-11-22 05:59:30.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 05:59:30 compute-0 nova_compute[255660]: 2025-11-22 05:59:30.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 05:59:30 compute-0 nova_compute[255660]: 2025-11-22 05:59:30.150 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 05:59:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "format": "json"}]: dispatch
Nov 22 05:59:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:30 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 05:59:31 compute-0 podman[272249]: 2025-11-22 05:59:31.314214138 +0000 UTC m=+0.170835999 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta.tmp'
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta.tmp' to config b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d/.meta'
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:31 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:31 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:31 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:32 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "format": "json"}]: dispatch
Nov 22 05:59:32 compute-0 ceph-mon[75840]: pgmap v1192: 321 pgs: 321 active+clean; 69 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 84 KiB/s wr, 6 op/s
Nov 22 05:59:32 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 74 KiB/s wr, 4 op/s
Nov 22 05:59:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 22 05:59:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:33 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 05:59:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 22 05:59:33 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 22 05:59:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 22 05:59:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 22 05:59:33 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 22 05:59:33 compute-0 sudo[272275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:33 compute-0 sudo[272275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:33 compute-0 sudo[272275]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:33 compute-0 sudo[272300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:59:33 compute-0 sudo[272300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:33 compute-0 sudo[272300]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:33 compute-0 sudo[272325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:33 compute-0 sudo[272325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:33 compute-0 sudo[272325]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:34 compute-0 sudo[272350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 05:59:34 compute-0 sudo[272350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:34 compute-0 ceph-mon[75840]: pgmap v1193: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 74 KiB/s wr, 4 op/s
Nov 22 05:59:34 compute-0 ceph-mon[75840]: osdmap e160: 3 total, 3 up, 3 in
Nov 22 05:59:34 compute-0 ceph-mon[75840]: osdmap e161: 3 total, 3 up, 3 in
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91849941-04fc-4d5c-809e-4a9e43af8a9b, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:34 compute-0 sudo[272350]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev dfbffaf2-fc72-4ebb-9d8f-fdc410a1b86b does not exist
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev aa3c5b55-6c36-4eeb-902c-1f2a9026b532 does not exist
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 1c525ea3-d11e-4374-8382-fd43b58dc835 does not exist
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:59:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 05:59:34 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:59:34 compute-0 sudo[272406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:34 compute-0 sudo[272406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:34 compute-0 sudo[272406]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:34 compute-0 sudo[272431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:59:34 compute-0 sudo[272431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:34 compute-0 sudo[272431]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:34 compute-0 sudo[272456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:34 compute-0 sudo[272456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:34 compute-0 sudo[272456]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 5 op/s
Nov 22 05:59:34 compute-0 sudo[272481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 05:59:34 compute-0 sudo[272481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b_069edcb7-8804-49a7-b7ef-0c39ebac6aee", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "91849941-04fc-4d5c-809e-4a9e43af8a9b", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 05:59:35 compute-0 ceph-mon[75840]: pgmap v1196: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 92 KiB/s wr, 5 op/s
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.326791361 +0000 UTC m=+0.056432994 container create bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:59:35 compute-0 systemd[1]: Started libpod-conmon-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope.
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.297360342 +0000 UTC m=+0.027002055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.429051571 +0000 UTC m=+0.158693294 container init bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.438458813 +0000 UTC m=+0.168100486 container start bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.44317907 +0000 UTC m=+0.172820743 container attach bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:35 compute-0 funny_zhukovsky[272564]: 167 167
Nov 22 05:59:35 compute-0 systemd[1]: libpod-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope: Deactivated successfully.
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.446252552 +0000 UTC m=+0.175894195 container died bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 05:59:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-59840e9aa668b49b4f7bd6c40535dd95c4218b2dfd5017620c6caddfb6a2eb21-merged.mount: Deactivated successfully.
Nov 22 05:59:35 compute-0 podman[272547]: 2025-11-22 05:59:35.499319464 +0000 UTC m=+0.228961097 container remove bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_zhukovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:59:35 compute-0 systemd[1]: libpod-conmon-bba39b3109a600adcb99a6a0b4b76688025c00959f6ecf18c9cb31653ec0237b.scope: Deactivated successfully.
Nov 22 05:59:35 compute-0 podman[272587]: 2025-11-22 05:59:35.713848422 +0000 UTC m=+0.042881009 container create a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:59:35 compute-0 systemd[1]: Started libpod-conmon-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope.
Nov 22 05:59:35 compute-0 podman[272587]: 2025-11-22 05:59:35.698939543 +0000 UTC m=+0.027972150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:35 compute-0 podman[272587]: 2025-11-22 05:59:35.841043641 +0000 UTC m=+0.170076268 container init a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:59:35 compute-0 podman[272587]: 2025-11-22 05:59:35.855631421 +0000 UTC m=+0.184664048 container start a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:59:35 compute-0 podman[272587]: 2025-11-22 05:59:35.860683457 +0000 UTC m=+0.189716054 container attach a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:36 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:36.685+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe5f4c39-d36f-406f-9522-4233e36c1e1d' of type subvolume
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fe5f4c39-d36f-406f-9522-4233e36c1e1d' of type subvolume
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fe5f4c39-d36f-406f-9522-4233e36c1e1d'' moved to trashcan
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fe5f4c39-d36f-406f-9522-4233e36c1e1d, vol_name:cephfs) < ""
Nov 22 05:59:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 3 op/s
Nov 22 05:59:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 05:59:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.939 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 05:59:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 05:59:36 compute-0 hungry_burnell[272604]: --> passed data devices: 0 physical, 3 LVM
Nov 22 05:59:36 compute-0 hungry_burnell[272604]: --> relative data size: 1.0
Nov 22 05:59:36 compute-0 hungry_burnell[272604]: --> All data devices are unavailable
Nov 22 05:59:37 compute-0 systemd[1]: libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Deactivated successfully.
Nov 22 05:59:37 compute-0 systemd[1]: libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Consumed 1.098s CPU time.
Nov 22 05:59:37 compute-0 conmon[272604]: conmon a0d36d375d42023e267d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope/container/memory.events
Nov 22 05:59:37 compute-0 podman[272587]: 2025-11-22 05:59:37.006136361 +0000 UTC m=+1.335168988 container died a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 05:59:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-073c46a3942d3b38a134bf2c1e6995cb402ee5f62dbfcf51976c0d98cb5f2a3e-merged.mount: Deactivated successfully.
Nov 22 05:59:37 compute-0 podman[272587]: 2025-11-22 05:59:37.068903323 +0000 UTC m=+1.397935910 container remove a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 05:59:37 compute-0 systemd[1]: libpod-conmon-a0d36d375d42023e267deed9fb7ec3c8728e40b140f06bfc9b152c9a63eec70e.scope: Deactivated successfully.
Nov 22 05:59:37 compute-0 sudo[272481]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:37 compute-0 sudo[272645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:37 compute-0 sudo[272645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:37 compute-0 sudo[272645]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:37 compute-0 sudo[272670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:59:37 compute-0 sudo[272670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:37 compute-0 sudo[272670]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:37 compute-0 sudo[272695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:37 compute-0 sudo[272695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:37 compute-0 sudo[272695]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:37 compute-0 sudo[272720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 05:59:37 compute-0 sudo[272720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:37 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "format": "json"}]: dispatch
Nov 22 05:59:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:37 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:37 compute-0 podman[272785]: 2025-11-22 05:59:37.953187959 +0000 UTC m=+0.070204453 container create 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 05:59:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 22 05:59:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "format": "json"}]: dispatch
Nov 22 05:59:37 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fe5f4c39-d36f-406f-9522-4233e36c1e1d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:37 compute-0 ceph-mon[75840]: pgmap v1197: 321 pgs: 321 active+clean; 69 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 3 op/s
Nov 22 05:59:37 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 22 05:59:37 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 22 05:59:37 compute-0 systemd[1]: Started libpod-conmon-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope.
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:37.919714632 +0000 UTC m=+0.036731116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:38.053105936 +0000 UTC m=+0.170122430 container init 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:38.060705899 +0000 UTC m=+0.177722333 container start 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:38.064237905 +0000 UTC m=+0.181254319 container attach 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 05:59:38 compute-0 nervous_tu[272801]: 167 167
Nov 22 05:59:38 compute-0 systemd[1]: libpod-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope: Deactivated successfully.
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:38.067904073 +0000 UTC m=+0.184920507 container died 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 05:59:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea0ffd43520c2ca91b43d59b4ccca942c2936532a09864ab645ad81f59ce860-merged.mount: Deactivated successfully.
Nov 22 05:59:38 compute-0 podman[272785]: 2025-11-22 05:59:38.114574313 +0000 UTC m=+0.231590737 container remove 39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:38 compute-0 systemd[1]: libpod-conmon-39d9fdf94b34be725b18c8264e4f9acf359ed46d4176180f0d55961a08b97937.scope: Deactivated successfully.
Nov 22 05:59:38 compute-0 podman[272827]: 2025-11-22 05:59:38.341653278 +0000 UTC m=+0.052714474 container create 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:38 compute-0 systemd[1]: Started libpod-conmon-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope.
Nov 22 05:59:38 compute-0 podman[272827]: 2025-11-22 05:59:38.317900032 +0000 UTC m=+0.028961048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:38 compute-0 podman[272827]: 2025-11-22 05:59:38.457826521 +0000 UTC m=+0.168887517 container init 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:59:38 compute-0 podman[272827]: 2025-11-22 05:59:38.474714534 +0000 UTC m=+0.185775490 container start 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:38 compute-0 podman[272827]: 2025-11-22 05:59:38.480895169 +0000 UTC m=+0.191956165 container attach 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 05:59:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 99 KiB/s wr, 4 op/s
Nov 22 05:59:38 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "format": "json"}]: dispatch
Nov 22 05:59:38 compute-0 ceph-mon[75840]: osdmap e162: 3 total, 3 up, 3 in
Nov 22 05:59:39 compute-0 angry_pare[272843]: {
Nov 22 05:59:39 compute-0 angry_pare[272843]:     "0": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:         {
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "devices": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "/dev/loop3"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             ],
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_name": "ceph_lv0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_size": "21470642176",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "name": "ceph_lv0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "tags": {
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.crush_device_class": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.encrypted": "0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_id": "0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.vdo": "0"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             },
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "vg_name": "ceph_vg0"
Nov 22 05:59:39 compute-0 angry_pare[272843]:         }
Nov 22 05:59:39 compute-0 angry_pare[272843]:     ],
Nov 22 05:59:39 compute-0 angry_pare[272843]:     "1": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:         {
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "devices": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "/dev/loop4"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             ],
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_name": "ceph_lv1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_size": "21470642176",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "name": "ceph_lv1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "tags": {
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.crush_device_class": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.encrypted": "0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_id": "1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.vdo": "0"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             },
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "vg_name": "ceph_vg1"
Nov 22 05:59:39 compute-0 angry_pare[272843]:         }
Nov 22 05:59:39 compute-0 angry_pare[272843]:     ],
Nov 22 05:59:39 compute-0 angry_pare[272843]:     "2": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:         {
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "devices": [
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "/dev/loop5"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             ],
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_name": "ceph_lv2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_size": "21470642176",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "name": "ceph_lv2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "tags": {
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.cluster_name": "ceph",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.crush_device_class": "",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.encrypted": "0",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osd_id": "2",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:                 "ceph.vdo": "0"
Nov 22 05:59:39 compute-0 angry_pare[272843]:             },
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "type": "block",
Nov 22 05:59:39 compute-0 angry_pare[272843]:             "vg_name": "ceph_vg2"
Nov 22 05:59:39 compute-0 angry_pare[272843]:         }
Nov 22 05:59:39 compute-0 angry_pare[272843]:     ]
Nov 22 05:59:39 compute-0 angry_pare[272843]: }
Nov 22 05:59:39 compute-0 systemd[1]: libpod-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope: Deactivated successfully.
Nov 22 05:59:39 compute-0 podman[272827]: 2025-11-22 05:59:39.266337656 +0000 UTC m=+0.977398642 container died 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 05:59:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-be18bbc60a122cb925ea32db66243091a9987a7884cbc320db6a70cb6ba8f94c-merged.mount: Deactivated successfully.
Nov 22 05:59:39 compute-0 podman[272827]: 2025-11-22 05:59:39.342716523 +0000 UTC m=+1.053777479 container remove 1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:59:39 compute-0 systemd[1]: libpod-conmon-1bb84fe7a333d0313286a0766e976a5978551bdf0bea6cb81519b61c6bb01488.scope: Deactivated successfully.
Nov 22 05:59:39 compute-0 sudo[272720]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:39 compute-0 sudo[272867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:39 compute-0 sudo[272867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:39 compute-0 sudo[272867]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:39 compute-0 sudo[272892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 05:59:39 compute-0 sudo[272892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:39 compute-0 sudo[272892]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:39 compute-0 sudo[272917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:39 compute-0 sudo[272917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:39 compute-0 sudo[272917]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:39 compute-0 sudo[272942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 05:59:39 compute-0 sudo[272942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:39 compute-0 ceph-mon[75840]: pgmap v1199: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 99 KiB/s wr, 4 op/s
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.111864963 +0000 UTC m=+0.040372233 container create 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 05:59:40 compute-0 systemd[1]: Started libpod-conmon-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope.
Nov 22 05:59:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6dd259a0-2767-493c-a1d5-a32b18495137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.095305019 +0000 UTC m=+0.023812309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.195957757 +0000 UTC m=+0.124465047 container init 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6dd259a0-2767-493c-a1d5-a32b18495137'' moved to trashcan
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6dd259a0-2767-493c-a1d5-a32b18495137, vol_name:cephfs) < ""
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.211273377 +0000 UTC m=+0.139780687 container start 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.215675465 +0000 UTC m=+0.144182765 container attach 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 05:59:40 compute-0 boring_moser[273025]: 167 167
Nov 22 05:59:40 compute-0 systemd[1]: libpod-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope: Deactivated successfully.
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.219505718 +0000 UTC m=+0.148013058 container died 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 05:59:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b57ee495d9acd81b8e79c7a774fca2453734da5beeeae4e756c4639f6df91d04-merged.mount: Deactivated successfully.
Nov 22 05:59:40 compute-0 podman[273008]: 2025-11-22 05:59:40.260889997 +0000 UTC m=+0.189397267 container remove 1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_moser, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 05:59:40 compute-0 systemd[1]: libpod-conmon-1ae4bc9bc31101d2d0c4590c95978e7fcb005809688c2531eb5b8f6578ab7e8d.scope: Deactivated successfully.
Nov 22 05:59:40 compute-0 podman[273049]: 2025-11-22 05:59:40.465390577 +0000 UTC m=+0.071186379 container create f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 05:59:40 compute-0 systemd[1]: Started libpod-conmon-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope.
Nov 22 05:59:40 compute-0 podman[273049]: 2025-11-22 05:59:40.432047713 +0000 UTC m=+0.037843575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 05:59:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 05:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 05:59:40 compute-0 podman[273049]: 2025-11-22 05:59:40.579615207 +0000 UTC m=+0.185411059 container init f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:40 compute-0 podman[273049]: 2025-11-22 05:59:40.59537427 +0000 UTC m=+0.201170032 container start f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 05:59:40 compute-0 podman[273049]: 2025-11-22 05:59:40.598845863 +0000 UTC m=+0.204641715 container attach f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 05:59:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 76 KiB/s wr, 3 op/s
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]: {
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_id": 1,
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "type": "bluestore"
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     },
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_id": 2,
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "type": "bluestore"
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     },
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_id": 0,
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:         "type": "bluestore"
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]:     }
Nov 22 05:59:41 compute-0 blissful_varahamihira[273065]: }
Nov 22 05:59:41 compute-0 systemd[1]: libpod-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Deactivated successfully.
Nov 22 05:59:41 compute-0 podman[273049]: 2025-11-22 05:59:41.785953403 +0000 UTC m=+1.391749165 container died f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 05:59:41 compute-0 systemd[1]: libpod-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Consumed 1.196s CPU time.
Nov 22 05:59:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-70263d3250ffbb4db3f2f34922b83a499efe194a4f36a55b3f3e9e741feadd92-merged.mount: Deactivated successfully.
Nov 22 05:59:41 compute-0 podman[273049]: 2025-11-22 05:59:41.845052467 +0000 UTC m=+1.450848229 container remove f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 05:59:41 compute-0 systemd[1]: libpod-conmon-f00a949625b8ed6a1da8314cd8749e855f05d648e1cfc4d058381a3e017ae410.scope: Deactivated successfully.
Nov 22 05:59:41 compute-0 sudo[272942]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 05:59:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 05:59:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e6238643-41fe-4f73-99b6-f536a9f8405d does not exist
Nov 22 05:59:41 compute-0 podman[273098]: 2025-11-22 05:59:41.90788435 +0000 UTC m=+0.080000144 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 05:59:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9f8757b8-6b04-49ba-9168-da034a39a02b does not exist
Nov 22 05:59:41 compute-0 podman[273107]: 2025-11-22 05:59:41.926510669 +0000 UTC m=+0.098532741 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 05:59:41 compute-0 sudo[273147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 05:59:41 compute-0 sudo[273147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:41 compute-0 sudo[273147]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "format": "json"}]: dispatch
Nov 22 05:59:42 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6dd259a0-2767-493c-a1d5-a32b18495137", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:42 compute-0 ceph-mon[75840]: pgmap v1200: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 261 B/s rd, 76 KiB/s wr, 3 op/s
Nov 22 05:59:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 05:59:42 compute-0 sudo[273172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 05:59:42 compute-0 sudo[273172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 05:59:42 compute-0 sudo[273172]: pam_unix(sudo:session): session closed for user root
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a1f4aa87-8f0d-4096-b514-6eead3321ab7, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 81 KiB/s wr, 5 op/s
Nov 22 05:59:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 22 05:59:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 22 05:59:43 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp'
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta.tmp' to config b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e/.meta'
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7cb6a540-aa78-41e5-b112-51878416b681, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_05:59:43
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'volumes', 'backups']
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 05:59:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 05:59:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7_b4f5f2ab-bb7c-43c0-aeca-d65879453a15", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:44 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "a1f4aa87-8f0d-4096-b514-6eead3321ab7", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:44 compute-0 ceph-mon[75840]: pgmap v1201: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 81 KiB/s wr, 5 op/s
Nov 22 05:59:44 compute-0 ceph-mon[75840]: osdmap e163: 3 total, 3 up, 3 in
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 05:59:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 96 KiB/s wr, 5 op/s
Nov 22 05:59:45 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681_4b9a1734-eccd-4930-a293-83126ba93df5", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:45 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "snap_name": "7cb6a540-aa78-41e5-b112-51878416b681", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:46 compute-0 ceph-mon[75840]: pgmap v1203: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 96 KiB/s wr, 5 op/s
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 344 B/s rd, 20 KiB/s wr, 2 op/s
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:46 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:46.972+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ea650d4-0ea6-408a-8107-7d06795baf3e' of type subvolume
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8ea650d4-0ea6-408a-8107-7d06795baf3e' of type subvolume
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8ea650d4-0ea6-408a-8107-7d06795baf3e'' moved to trashcan
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:46 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8ea650d4-0ea6-408a-8107-7d06795baf3e, vol_name:cephfs) < ""
Nov 22 05:59:47 compute-0 ceph-mon[75840]: pgmap v1204: 321 pgs: 321 active+clean; 70 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 344 B/s rd, 20 KiB/s wr, 2 op/s
Nov 22 05:59:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "format": "json"}]: dispatch
Nov 22 05:59:47 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8ea650d4-0ea6-408a-8107-7d06795baf3e", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 05:59:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:59:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 05:59:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:59:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "format": "json"}]: dispatch
Nov 22 05:59:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:47 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 22 05:59:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 05:59:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/841035940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 05:59:48 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "format": "json"}]: dispatch
Nov 22 05:59:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 22 05:59:48 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 22 05:59:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:59:49 compute-0 ceph-mon[75840]: osdmap e164: 3 total, 3 up, 3 in
Nov 22 05:59:49 compute-0 ceph-mon[75840]: pgmap v1206: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 7 op/s
Nov 22 05:59:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 70 KiB/s wr, 5 op/s
Nov 22 05:59:51 compute-0 ceph-mon[75840]: pgmap v1207: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 70 KiB/s wr, 5 op/s
Nov 22 05:59:52 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:52.381 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 05:59:52 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:52.382 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 69 KiB/s wr, 5 op/s
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:52 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b3b5af82-6f64-44d3-be30-5f5255e6da69, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00047731185290095723 of space, bias 4.0, pg target 0.5727742234811487 quantized to 16 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 05:59:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 05:59:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 22 05:59:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 22 05:59:53 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 22 05:59:54 compute-0 ceph-mon[75840]: pgmap v1208: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 538 B/s rd, 69 KiB/s wr, 5 op/s
Nov 22 05:59:54 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69_81755b2d-8933-4607-923d-d11f8165f30d", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:54 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "b3b5af82-6f64-44d3-be30-5f5255e6da69", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:54 compute-0 ceph-mon[75840]: osdmap e165: 3 total, 3 up, 3 in
Nov 22 05:59:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 81 KiB/s wr, 6 op/s
Nov 22 05:59:56 compute-0 ceph-mon[75840]: pgmap v1210: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 81 KiB/s wr, 6 op/s
Nov 22 05:59:56 compute-0 ovn_metadata_agent[164613]: 2025-11-22 05:59:56.384 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp'
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta.tmp' to config b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f/.meta'
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d375caa0-bb8d-47a9-9906-e56f6c4b9b10, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 11 KiB/s wr, 1 op/s
Nov 22 05:59:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 22 05:59:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10_69301942-28c7-4014-a246-50ecf9648404", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:58 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "snap_name": "d375caa0-bb8d-47a9-9906-e56f6c4b9b10", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:58 compute-0 ceph-mon[75840]: pgmap v1211: 321 pgs: 321 active+clean; 71 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 11 KiB/s wr, 1 op/s
Nov 22 05:59:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 22 05:59:58 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 22 05:59:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 05:59:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 05:59:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 22 05:59:59 compute-0 ceph-mon[75840]: osdmap e166: 3 total, 3 up, 3 in
Nov 22 05:59:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 22 05:59:59 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 05:59:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T05:59:59.981+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18e9f280-7994-4c49-95f7-6a6f9ebabd4f' of type subvolume
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18e9f280-7994-4c49-95f7-6a6f9ebabd4f' of type subvolume
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "force": true, "format": "json"}]: dispatch
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18e9f280-7994-4c49-95f7-6a6f9ebabd4f'' moved to trashcan
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 05:59:59 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18e9f280-7994-4c49-95f7-6a6f9ebabd4f, vol_name:cephfs) < ""
Nov 22 06:00:00 compute-0 ceph-mon[75840]: pgmap v1213: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 49 KiB/s wr, 4 op/s
Nov 22 06:00:00 compute-0 ceph-mon[75840]: osdmap e167: 3 total, 3 up, 3 in
Nov 22 06:00:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 22 06:00:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 06:00:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2130918775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 06:00:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "format": "json"}]: dispatch
Nov 22 06:00:00 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18e9f280-7994-4c49-95f7-6a6f9ebabd4f", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 06:00:01 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 06:00:01 compute-0 ceph-mon[75840]: pgmap v1215: 321 pgs: 321 active+clean; 71 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 272 B/s rd, 40 KiB/s wr, 3 op/s
Nov 22 06:00:02 compute-0 podman[273198]: 2025-11-22 06:00:02.225270542 +0000 UTC m=+0.085895173 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 06:00:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 6 op/s
Nov 22 06:00:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "format": "json"}]: dispatch
Nov 22 06:00:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:03 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 22 06:00:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 22 06:00:03 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 22 06:00:04 compute-0 ceph-mon[75840]: pgmap v1216: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 92 KiB/s wr, 6 op/s
Nov 22 06:00:04 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "format": "json"}]: dispatch
Nov 22 06:00:04 compute-0 ceph-mon[75840]: osdmap e168: 3 total, 3 up, 3 in
Nov 22 06:00:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 64 KiB/s wr, 4 op/s
Nov 22 06:00:06 compute-0 ceph-mon[75840]: pgmap v1218: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 64 KiB/s wr, 4 op/s
Nov 22 06:00:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s
Nov 22 06:00:08 compute-0 ceph-mon[75840]: pgmap v1219: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 55 KiB/s wr, 3 op/s
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp'
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta.tmp' to config b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091/.meta'
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:41f46daf-9a06-4ed4-add0-ee36e9947701, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 418 B/s rd, 62 KiB/s wr, 4 op/s
Nov 22 06:00:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701_f12cbc90-6f8e-4e88-9d93-d7bc80b572a3", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:10 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "snap_name": "41f46daf-9a06-4ed4-add0-ee36e9947701", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:10 compute-0 ceph-mon[75840]: pgmap v1220: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 418 B/s rd, 62 KiB/s wr, 4 op/s
Nov 22 06:00:10 compute-0 sshd-session[273224]: Invalid user solana from 80.94.92.166 port 51708
Nov 22 06:00:10 compute-0 sshd-session[273224]: Connection closed by invalid user solana 80.94.92.166 port 51708 [preauth]
Nov 22 06:00:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 60 KiB/s wr, 3 op/s
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 06:00:12 compute-0 ceph-mon[75840]: pgmap v1221: 321 pgs: 321 active+clean; 71 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 60 KiB/s wr, 3 op/s
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 22 06:00:12 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:00:12.077+0000 7f5339360640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a01560f-b8db-4a3a-8f6c-493d0f32d091' of type subvolume
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a01560f-b8db-4a3a-8f6c-493d0f32d091' of type subvolume
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4a01560f-b8db-4a3a-8f6c-493d0f32d091'' moved to trashcan
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a01560f-b8db-4a3a-8f6c-493d0f32d091, vol_name:cephfs) < ""
Nov 22 06:00:12 compute-0 podman[273226]: 2025-11-22 06:00:12.210726506 +0000 UTC m=+0.070635194 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 06:00:12 compute-0 podman[273227]: 2025-11-22 06:00:12.214080865 +0000 UTC m=+0.073795768 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 06:00:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 06:00:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 22 06:00:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 22 06:00:13 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "format": "json"}]: dispatch
Nov 22 06:00:13 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 22 06:00:13 compute-0 nova_compute[255660]: 2025-11-22 06:00:13.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:13 compute-0 nova_compute[255660]: 2025-11-22 06:00:13.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 06:00:13 compute-0 nova_compute[255660]: 2025-11-22 06:00:13.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 06:00:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b778070>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b778c10>)]
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 06:00:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 06:00:14 compute-0 ceph-mon[75840]: from='client.14357 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4a01560f-b8db-4a3a-8f6c-493d0f32d091", "force": true, "format": "json"}]: dispatch
Nov 22 06:00:14 compute-0 ceph-mon[75840]: pgmap v1222: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 06:00:14 compute-0 ceph-mon[75840]: osdmap e169: 3 total, 3 up, 3 in
Nov 22 06:00:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 06:00:15 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.mscchl(active, since 35m)
Nov 22 06:00:15 compute-0 ceph-mon[75840]: pgmap v1224: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 2 op/s
Nov 22 06:00:16 compute-0 ceph-mon[75840]: mgrmap e15: compute-0.mscchl(active, since 35m)
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.153 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.188 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.189 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.190 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:00:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:00:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145969244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.647 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.861 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.863 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:00:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.930 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.931 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:00:16 compute-0 nova_compute[255660]: 2025-11-22 06:00:16.959 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:00:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1145969244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:00:17 compute-0 ceph-mon[75840]: pgmap v1225: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 59 KiB/s wr, 3 op/s
Nov 22 06:00:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:00:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72605276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:00:17 compute-0 nova_compute[255660]: 2025-11-22 06:00:17.442 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:00:17 compute-0 nova_compute[255660]: 2025-11-22 06:00:17.449 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:00:17 compute-0 nova_compute[255660]: 2025-11-22 06:00:17.480 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:00:17 compute-0 nova_compute[255660]: 2025-11-22 06:00:17.482 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:00:17 compute-0 nova_compute[255660]: 2025-11-22 06:00:17.483 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:00:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/72605276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:00:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Nov 22 06:00:19 compute-0 ceph-mon[75840]: pgmap v1226: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Nov 22 06:00:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 3 op/s
Nov 22 06:00:21 compute-0 ceph-mon[75840]: pgmap v1227: 321 pgs: 321 active+clean; 72 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 65 KiB/s wr, 3 op/s
Nov 22 06:00:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 06:00:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 22 06:00:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 22 06:00:23 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 22 06:00:24 compute-0 ceph-mon[75840]: pgmap v1228: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 06:00:24 compute-0 ceph-mon[75840]: osdmap e170: 3 total, 3 up, 3 in
Nov 22 06:00:24 compute-0 nova_compute[255660]: 2025-11-22 06:00:24.458 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 06:00:26 compute-0 ceph-mon[75840]: pgmap v1230: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 2 op/s
Nov 22 06:00:26 compute-0 nova_compute[255660]: 2025-11-22 06:00:26.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:26 compute-0 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:26 compute-0 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:26 compute-0 nova_compute[255660]: 2025-11-22 06:00:26.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:00:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 1 op/s
Nov 22 06:00:28 compute-0 ceph-mon[75840]: pgmap v1231: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 23 KiB/s wr, 1 op/s
Nov 22 06:00:28 compute-0 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:28 compute-0 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:28 compute-0 nova_compute[255660]: 2025-11-22 06:00:28.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 06:00:29 compute-0 nova_compute[255660]: 2025-11-22 06:00:29.144 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:29 compute-0 nova_compute[255660]: 2025-11-22 06:00:29.145 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:29 compute-0 nova_compute[255660]: 2025-11-22 06:00:29.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 06:00:30 compute-0 ceph-mon[75840]: pgmap v1232: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 06:00:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 06:00:31 compute-0 ceph-mon[75840]: pgmap v1233: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s wr, 0 op/s
Nov 22 06:00:31 compute-0 nova_compute[255660]: 2025-11-22 06:00:31.147 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:00:31 compute-0 nova_compute[255660]: 2025-11-22 06:00:31.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:00:31 compute-0 nova_compute[255660]: 2025-11-22 06:00:31.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:00:31 compute-0 nova_compute[255660]: 2025-11-22 06:00:31.164 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:00:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 22 06:00:33 compute-0 podman[273306]: 2025-11-22 06:00:33.104762581 +0000 UTC m=+0.130880148 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 06:00:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:33 compute-0 ceph-mon[75840]: pgmap v1234: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 22 06:00:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s wr, 0 op/s
Nov 22 06:00:36 compute-0 ceph-mon[75840]: pgmap v1235: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s wr, 0 op/s
Nov 22 06:00:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 06:00:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.940 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:00:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:00:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:00:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:00:38 compute-0 ceph-mon[75840]: pgmap v1236: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 06:00:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 06:00:40 compute-0 ceph-mon[75840]: pgmap v1237: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 22 06:00:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:42 compute-0 ceph-mon[75840]: pgmap v1238: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:42 compute-0 sudo[273334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:42 compute-0 sudo[273334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:42 compute-0 sudo[273334]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:42 compute-0 sudo[273359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:00:42 compute-0 sudo[273359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:42 compute-0 sudo[273359]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:42 compute-0 sudo[273397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:42 compute-0 sudo[273397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:42 compute-0 sudo[273397]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:42 compute-0 podman[273383]: 2025-11-22 06:00:42.423495229 +0000 UTC m=+0.115044284 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 06:00:42 compute-0 podman[273384]: 2025-11-22 06:00:42.424000652 +0000 UTC m=+0.116548924 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 06:00:42 compute-0 sudo[273449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:00:42 compute-0 sudo[273449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:42 compute-0 sudo[273449]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c92dfd75-1d82-4223-b294-3b3d75830d90 does not exist
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev cf4c472c-a662-4263-8c64-b19a95745a5f does not exist
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev b1f70152-7188-4c6e-af59-0c7b8df006de does not exist
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:00:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:00:43 compute-0 sudo[273507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:43 compute-0 sudo[273507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:43 compute-0 sudo[273507]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:43 compute-0 sudo[273532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:00:43 compute-0 sudo[273532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:43 compute-0 sudo[273532]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:43 compute-0 sudo[273557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:43 compute-0 sudo[273557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:43 compute-0 sudo[273557]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:43 compute-0 sudo[273582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:00:43 compute-0 sudo[273582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:00:43
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.862259133 +0000 UTC m=+0.057330368 container create 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:00:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:00:43 compute-0 systemd[1]: Started libpod-conmon-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope.
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.841331722 +0000 UTC m=+0.036402977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.962547939 +0000 UTC m=+0.157619234 container init 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.969086245 +0000 UTC m=+0.164157480 container start 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.972739313 +0000 UTC m=+0.167810638 container attach 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:00:43 compute-0 beautiful_banzai[273665]: 167 167
Nov 22 06:00:43 compute-0 systemd[1]: libpod-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope: Deactivated successfully.
Nov 22 06:00:43 compute-0 conmon[273665]: conmon 0ab9d8809f7ebbf9cb3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope/container/memory.events
Nov 22 06:00:43 compute-0 podman[273648]: 2025-11-22 06:00:43.976245447 +0000 UTC m=+0.171316712 container died 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-87d56529a879a6a9c208d08528200082d980b8cdb8b6f4c2fc26ee0e1e3fed67-merged.mount: Deactivated successfully.
Nov 22 06:00:44 compute-0 podman[273648]: 2025-11-22 06:00:44.031909338 +0000 UTC m=+0.226980583 container remove 0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 06:00:44 compute-0 systemd[1]: libpod-conmon-0ab9d8809f7ebbf9cb3c2aae43dab2e6528ff0327f1b8f90c338e2b1a49cdf21.scope: Deactivated successfully.
Nov 22 06:00:44 compute-0 ceph-mon[75840]: pgmap v1239: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:00:44 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:00:44 compute-0 podman[273688]: 2025-11-22 06:00:44.248157453 +0000 UTC m=+0.056569087 container create 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 06:00:44 compute-0 systemd[1]: Started libpod-conmon-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope.
Nov 22 06:00:44 compute-0 podman[273688]: 2025-11-22 06:00:44.229095973 +0000 UTC m=+0.037507627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:44 compute-0 podman[273688]: 2025-11-22 06:00:44.345036529 +0000 UTC m=+0.153448233 container init 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:00:44 compute-0 podman[273688]: 2025-11-22 06:00:44.360225866 +0000 UTC m=+0.168637510 container start 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 06:00:44 compute-0 podman[273688]: 2025-11-22 06:00:44.364815909 +0000 UTC m=+0.173227563 container attach 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 06:00:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:45 compute-0 quizzical_banach[273705]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:00:45 compute-0 quizzical_banach[273705]: --> relative data size: 1.0
Nov 22 06:00:45 compute-0 quizzical_banach[273705]: --> All data devices are unavailable
Nov 22 06:00:45 compute-0 systemd[1]: libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Deactivated successfully.
Nov 22 06:00:45 compute-0 systemd[1]: libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Consumed 1.109s CPU time.
Nov 22 06:00:45 compute-0 conmon[273705]: conmon 3aaccf2111adfda7ba2d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope/container/memory.events
Nov 22 06:00:45 compute-0 podman[273688]: 2025-11-22 06:00:45.5112869 +0000 UTC m=+1.319698554 container died 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 06:00:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd053aa6421929165384374f93e7b53b9010ed49bf86eef17aa8cdc3d22664a8-merged.mount: Deactivated successfully.
Nov 22 06:00:45 compute-0 podman[273688]: 2025-11-22 06:00:45.592052044 +0000 UTC m=+1.400463658 container remove 3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 06:00:45 compute-0 systemd[1]: libpod-conmon-3aaccf2111adfda7ba2d1ef41151690673fc1df07a707f83a9176cf82f52987f.scope: Deactivated successfully.
Nov 22 06:00:45 compute-0 sudo[273582]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:45 compute-0 sudo[273746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:45 compute-0 sudo[273746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:45 compute-0 sudo[273746]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:45 compute-0 sudo[273771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:00:45 compute-0 sudo[273771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:45 compute-0 sudo[273771]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:45 compute-0 sudo[273796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:45 compute-0 sudo[273796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:45 compute-0 sudo[273796]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:46 compute-0 sudo[273821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:00:46 compute-0 sudo[273821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:46 compute-0 ceph-mon[75840]: pgmap v1240: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.377671886 +0000 UTC m=+0.046625650 container create fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 06:00:46 compute-0 systemd[1]: Started libpod-conmon-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope.
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.356987602 +0000 UTC m=+0.025941386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.471757197 +0000 UTC m=+0.140711041 container init fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.479187227 +0000 UTC m=+0.148141011 container start fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.483355708 +0000 UTC m=+0.152309502 container attach fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 06:00:46 compute-0 lucid_jennings[273906]: 167 167
Nov 22 06:00:46 compute-0 systemd[1]: libpod-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope: Deactivated successfully.
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.488080555 +0000 UTC m=+0.157034339 container died fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:00:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1219d5504e1f982af8c601e9827de70b292c9cbbdaef6ad1ba0f30e0f922fa9-merged.mount: Deactivated successfully.
Nov 22 06:00:46 compute-0 podman[273889]: 2025-11-22 06:00:46.53417864 +0000 UTC m=+0.203132434 container remove fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jennings, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:00:46 compute-0 systemd[1]: libpod-conmon-fdcc035b898b4706d4b95ebdebaf86a20d9e23e629e560b6f72efe0726c27d1b.scope: Deactivated successfully.
Nov 22 06:00:46 compute-0 podman[273930]: 2025-11-22 06:00:46.731794145 +0000 UTC m=+0.068861856 container create f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 06:00:46 compute-0 systemd[1]: Started libpod-conmon-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope.
Nov 22 06:00:46 compute-0 podman[273930]: 2025-11-22 06:00:46.70436312 +0000 UTC m=+0.041430871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:46 compute-0 podman[273930]: 2025-11-22 06:00:46.842572194 +0000 UTC m=+0.179639915 container init f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 06:00:46 compute-0 podman[273930]: 2025-11-22 06:00:46.856790535 +0000 UTC m=+0.193858276 container start f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 06:00:46 compute-0 podman[273930]: 2025-11-22 06:00:46.861631865 +0000 UTC m=+0.198699696 container attach f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:00:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:00:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:00:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:00:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:00:47 compute-0 boring_turing[273947]: {
Nov 22 06:00:47 compute-0 boring_turing[273947]:     "0": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:         {
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "devices": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "/dev/loop3"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             ],
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_name": "ceph_lv0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_size": "21470642176",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "name": "ceph_lv0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "tags": {
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_name": "ceph",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.crush_device_class": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.encrypted": "0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_id": "0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.vdo": "0"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             },
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "vg_name": "ceph_vg0"
Nov 22 06:00:47 compute-0 boring_turing[273947]:         }
Nov 22 06:00:47 compute-0 boring_turing[273947]:     ],
Nov 22 06:00:47 compute-0 boring_turing[273947]:     "1": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:         {
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "devices": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "/dev/loop4"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             ],
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_name": "ceph_lv1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_size": "21470642176",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "name": "ceph_lv1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "tags": {
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_name": "ceph",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.crush_device_class": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.encrypted": "0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_id": "1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.vdo": "0"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             },
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "vg_name": "ceph_vg1"
Nov 22 06:00:47 compute-0 boring_turing[273947]:         }
Nov 22 06:00:47 compute-0 boring_turing[273947]:     ],
Nov 22 06:00:47 compute-0 boring_turing[273947]:     "2": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:         {
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "devices": [
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "/dev/loop5"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             ],
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_name": "ceph_lv2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_size": "21470642176",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "name": "ceph_lv2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "tags": {
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.cluster_name": "ceph",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.crush_device_class": "",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.encrypted": "0",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osd_id": "2",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:                 "ceph.vdo": "0"
Nov 22 06:00:47 compute-0 boring_turing[273947]:             },
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "type": "block",
Nov 22 06:00:47 compute-0 boring_turing[273947]:             "vg_name": "ceph_vg2"
Nov 22 06:00:47 compute-0 boring_turing[273947]:         }
Nov 22 06:00:47 compute-0 boring_turing[273947]:     ]
Nov 22 06:00:47 compute-0 boring_turing[273947]: }
Nov 22 06:00:47 compute-0 systemd[1]: libpod-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope: Deactivated successfully.
Nov 22 06:00:47 compute-0 podman[273956]: 2025-11-22 06:00:47.771003182 +0000 UTC m=+0.031821693 container died f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c7d39bfeb80c173315689091a4b808944c905b54c6503f5609e3940b531eb3-merged.mount: Deactivated successfully.
Nov 22 06:00:47 compute-0 podman[273956]: 2025-11-22 06:00:47.853661537 +0000 UTC m=+0.114479978 container remove f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Nov 22 06:00:47 compute-0 systemd[1]: libpod-conmon-f6779922179b6d2f7ced85aa85179b0f9073fc95c29419f3792330cd3f73f126.scope: Deactivated successfully.
Nov 22 06:00:47 compute-0 sudo[273821]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:48 compute-0 sudo[273971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:48 compute-0 sudo[273971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:48 compute-0 sudo[273971]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:48 compute-0 ceph-mon[75840]: pgmap v1241: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:00:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/42910641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:00:48 compute-0 sudo[273996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:00:48 compute-0 sudo[273996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:48 compute-0 sudo[273996]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:48 compute-0 sudo[274021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:48 compute-0 sudo[274021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:48 compute-0 sudo[274021]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:48 compute-0 sudo[274046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:00:48 compute-0 sudo[274046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.68931047 +0000 UTC m=+0.056349361 container create b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:00:48 compute-0 systemd[1]: Started libpod-conmon-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope.
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.663994382 +0000 UTC m=+0.031033363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.789261249 +0000 UTC m=+0.156300240 container init b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.801331662 +0000 UTC m=+0.168370553 container start b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.804828536 +0000 UTC m=+0.171867517 container attach b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 06:00:48 compute-0 brave_noyce[274130]: 167 167
Nov 22 06:00:48 compute-0 systemd[1]: libpod-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope: Deactivated successfully.
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.80985606 +0000 UTC m=+0.176894981 container died b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 22 06:00:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0081c3e54f9d4b89f8d237b5b1f1d8caf58038075dc743022fe3e26ce1a9a25-merged.mount: Deactivated successfully.
Nov 22 06:00:48 compute-0 podman[274113]: 2025-11-22 06:00:48.86545681 +0000 UTC m=+0.232495731 container remove b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_noyce, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 06:00:48 compute-0 systemd[1]: libpod-conmon-b3da9513c0d825f1fa6ad6a963696a9374d966660fd5b23a1087a1981926f487.scope: Deactivated successfully.
Nov 22 06:00:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:49 compute-0 podman[274154]: 2025-11-22 06:00:49.081577561 +0000 UTC m=+0.056843324 container create 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:00:49 compute-0 systemd[1]: Started libpod-conmon-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope.
Nov 22 06:00:49 compute-0 podman[274154]: 2025-11-22 06:00:49.05614805 +0000 UTC m=+0.031413903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:00:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:00:49 compute-0 podman[274154]: 2025-11-22 06:00:49.184227202 +0000 UTC m=+0.159492965 container init 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 06:00:49 compute-0 podman[274154]: 2025-11-22 06:00:49.19236533 +0000 UTC m=+0.167631093 container start 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:00:49 compute-0 podman[274154]: 2025-11-22 06:00:49.195874354 +0000 UTC m=+0.171140117 container attach 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 06:00:50 compute-0 ceph-mon[75840]: pgmap v1242: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:50 compute-0 eager_bardeen[274170]: {
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_id": 1,
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "type": "bluestore"
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     },
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_id": 2,
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "type": "bluestore"
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     },
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_id": 0,
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:         "type": "bluestore"
Nov 22 06:00:50 compute-0 eager_bardeen[274170]:     }
Nov 22 06:00:50 compute-0 eager_bardeen[274170]: }
Nov 22 06:00:50 compute-0 systemd[1]: libpod-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Deactivated successfully.
Nov 22 06:00:50 compute-0 systemd[1]: libpod-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Consumed 1.028s CPU time.
Nov 22 06:00:50 compute-0 podman[274204]: 2025-11-22 06:00:50.841588003 +0000 UTC m=+0.029936863 container died 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:00:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-175fbd58af1e7f828e60fe3aeca38f17ba57ed05dd6502a8a574bf464a9e86c2-merged.mount: Deactivated successfully.
Nov 22 06:00:50 compute-0 podman[274204]: 2025-11-22 06:00:50.902574148 +0000 UTC m=+0.090922938 container remove 57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:00:50 compute-0 systemd[1]: libpod-conmon-57176eeed826230dfb85c490b55e07afe29b40c65e57320fa1ea6b579e5d3a39.scope: Deactivated successfully.
Nov 22 06:00:50 compute-0 sudo[274046]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:00:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:00:50 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:50 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 4e956076-3567-4abb-92c4-5ce3e49019cc does not exist
Nov 22 06:00:50 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a4aecf8e-7765-4ca9-b37c-565eb916d38b does not exist
Nov 22 06:00:51 compute-0 sudo[274218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:00:51 compute-0 sudo[274218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:51 compute-0 sudo[274218]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:51 compute-0 sudo[274243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:00:51 compute-0 sudo[274243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:00:51 compute-0 sudo[274243]: pam_unix(sudo:session): session closed for user root
Nov 22 06:00:51 compute-0 ceph-mon[75840]: pgmap v1243: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:51 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:00:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:00:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:00:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:53 compute-0 ceph-mon[75840]: pgmap v1244: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:55 compute-0 ceph-mon[75840]: pgmap v1245: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:57 compute-0 ceph-mon[75840]: pgmap v1246: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:00:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:00:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:00 compute-0 ceph-mon[75840]: pgmap v1247: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:01 compute-0 CROND[274269]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 06:01:01 compute-0 run-parts[274272]: (/etc/cron.hourly) starting 0anacron
Nov 22 06:01:02 compute-0 run-parts[274278]: (/etc/cron.hourly) finished 0anacron
Nov 22 06:01:02 compute-0 CROND[274268]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 06:01:02 compute-0 ceph-mon[75840]: pgmap v1248: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:04 compute-0 ceph-mon[75840]: pgmap v1249: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:04 compute-0 podman[274279]: 2025-11-22 06:01:04.25221348 +0000 UTC m=+0.112287540 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 06:01:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:05 compute-0 ceph-mon[75840]: pgmap v1250: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:08 compute-0 ceph-mon[75840]: pgmap v1251: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:10 compute-0 ceph-mon[75840]: pgmap v1252: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:12 compute-0 ceph-mon[75840]: pgmap v1253: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:13 compute-0 podman[274305]: 2025-11-22 06:01:13.236593268 +0000 UTC m=+0.086418034 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:01:13 compute-0 podman[274306]: 2025-11-22 06:01:13.262895572 +0000 UTC m=+0.106583373 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:01:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b60e5e0>)]
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b663eb0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f536b6634f0>)]
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:01:14 compute-0 ceph-mon[75840]: pgmap v1254: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.073686) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274073765, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2525, "num_deletes": 513, "total_data_size": 3522358, "memory_usage": 3574864, "flush_reason": "Manual Compaction"}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274113643, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3253120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26102, "largest_seqno": 28626, "table_properties": {"data_size": 3242090, "index_size": 6564, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27926, "raw_average_key_size": 20, "raw_value_size": 3217445, "raw_average_value_size": 2408, "num_data_blocks": 288, "num_entries": 1336, "num_filter_entries": 1336, "num_deletions": 513, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791083, "oldest_key_time": 1763791083, "file_creation_time": 1763791274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 40004 microseconds, and 8882 cpu microseconds.
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.113702) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3253120 bytes OK
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.113727) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118688) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118714) EVENT_LOG_v1 {"time_micros": 1763791274118706, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.118737) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3510434, prev total WAL file size 3510434, number of live WAL files 2.
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.120803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3176KB)], [59(9531KB)]
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274120927, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13013634, "oldest_snapshot_seqno": -1}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5838 keys, 8435102 bytes, temperature: kUnknown
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274226303, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8435102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8395561, "index_size": 23815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 145951, "raw_average_key_size": 25, "raw_value_size": 8290366, "raw_average_value_size": 1420, "num_data_blocks": 977, "num_entries": 5838, "num_filter_entries": 5838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.226675) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8435102 bytes
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.228391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.4 rd, 80.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.6) write-amplify(2.6) OK, records in: 6859, records dropped: 1021 output_compression: NoCompression
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.228420) EVENT_LOG_v1 {"time_micros": 1763791274228406, "job": 32, "event": "compaction_finished", "compaction_time_micros": 105496, "compaction_time_cpu_micros": 40425, "output_level": 6, "num_output_files": 1, "total_output_size": 8435102, "num_input_records": 6859, "num_output_records": 5838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274229603, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791274232865, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.120695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:01:14.232949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:01:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:15 compute-0 ceph-mon[75840]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.mscchl(active, since 36m)
Nov 22 06:01:16 compute-0 ceph-mon[75840]: pgmap v1255: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:16 compute-0 ceph-mon[75840]: mgrmap e16: compute-0.mscchl(active, since 36m)
Nov 22 06:01:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 22 06:01:17 compute-0 nova_compute[255660]: 2025-11-22 06:01:17.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:17 compute-0 ceph-mon[75840]: pgmap v1256: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Nov 22 06:01:18 compute-0 sshd-session[274344]: Accepted publickey for zuul from 192.168.122.10 port 56488 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 06:01:18 compute-0 systemd-logind[798]: New session 51 of user zuul.
Nov 22 06:01:18 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.290 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.291 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:01:18 compute-0 sshd-session[274344]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.292 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:01:18 compute-0 sudo[274349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 22 06:01:18 compute-0 sudo[274349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 06:01:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:01:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2361735710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.728 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:01:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2361735710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.887 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:01:18 compute-0 nova_compute[255660]: 2025-11-22 06:01:18.888 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:01:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.774 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.774 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.854 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 06:01:19 compute-0 ceph-mon[75840]: pgmap v1257: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.946 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.947 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.966 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 06:01:19 compute-0 nova_compute[255660]: 2025-11-22 06:01:19.995 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.022 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:01:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:01:20 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191580808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.515 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.522 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.552 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.554 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:01:20 compute-0 nova_compute[255660]: 2025-11-22 06:01:20.554 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:01:20 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4191580808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:01:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14509 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:21 compute-0 ceph-mon[75840]: pgmap v1258: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14511 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 06:01:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678055306' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:01:22 compute-0 ceph-mon[75840]: from='client.14509 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:22 compute-0 ceph-mon[75840]: from='client.14511 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2678055306' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:01:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:23 compute-0 ceph-mon[75840]: pgmap v1259: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:26 compute-0 ceph-mon[75840]: pgmap v1260: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:26 compute-0 nova_compute[255660]: 2025-11-22 06:01:26.551 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:26 compute-0 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:26 compute-0 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:26 compute-0 nova_compute[255660]: 2025-11-22 06:01:26.578 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:01:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:27 compute-0 nova_compute[255660]: 2025-11-22 06:01:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:27 compute-0 nova_compute[255660]: 2025-11-22 06:01:27.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:28 compute-0 ceph-mon[75840]: pgmap v1261: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 22 06:01:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 22 06:01:29 compute-0 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:29 compute-0 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:29 compute-0 nova_compute[255660]: 2025-11-22 06:01:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:30 compute-0 ceph-mon[75840]: pgmap v1262: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Nov 22 06:01:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:32 compute-0 ceph-mon[75840]: pgmap v1263: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:32 compute-0 nova_compute[255660]: 2025-11-22 06:01:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:01:32 compute-0 nova_compute[255660]: 2025-11-22 06:01:32.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:01:32 compute-0 nova_compute[255660]: 2025-11-22 06:01:32.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:01:32 compute-0 nova_compute[255660]: 2025-11-22 06:01:32.156 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:01:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:34 compute-0 ceph-mon[75840]: pgmap v1264: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:35 compute-0 podman[274669]: 2025-11-22 06:01:35.263142644 +0000 UTC m=+0.114500015 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 06:01:36 compute-0 ceph-mon[75840]: pgmap v1265: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.941 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:01:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.942 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:01:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:01:36.942 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:01:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:37 compute-0 ceph-mon[75840]: pgmap v1266: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:38 compute-0 ovs-vsctl[274745]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 06:01:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:39 compute-0 ceph-mon[75840]: pgmap v1267: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:39 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 06:01:39 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 06:01:39 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 06:01:40 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: cache status {prefix=cache status} (starting...)
Nov 22 06:01:40 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: client ls {prefix=client ls} (starting...)
Nov 22 06:01:40 compute-0 lvm[275104]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 06:01:40 compute-0 lvm[275104]: VG ceph_vg1 finished
Nov 22 06:01:40 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:40 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 06:01:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 06:01:41 compute-0 lvm[275144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 06:01:41 compute-0 lvm[275144]: VG ceph_vg2 finished
Nov 22 06:01:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:41 compute-0 lvm[275155]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 06:01:41 compute-0 lvm[275155]: VG ceph_vg0 finished
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 06:01:41 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:41 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:01:41 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:41.868+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:01:41 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 06:01:42 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 06:01:42 compute-0 ceph-mon[75840]: from='client.14515 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mon[75840]: pgmap v1268: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:42 compute-0 ceph-mon[75840]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 06:01:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333229747' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: ops {prefix=ops} (starting...)
Nov 22 06:01:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 06:01:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217697821' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:01:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765355900' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 06:01:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119397056' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:01:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:42 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 06:01:42 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864016272' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session ls {prefix=session ls} (starting...)
Nov 22 06:01:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 06:01:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881267314' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: status {prefix=status} (starting...)
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.14521 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3333229747' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4217697821' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/765355900' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/119397056' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: pgmap v1269: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:43 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1864016272' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 06:01:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/738643034' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 06:01:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578060366' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:01:43
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root']
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:01:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:01:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 06:01:43 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1990816085' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:01:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1881267314' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/738643034' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2578060366' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: from='client.14539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1990816085' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14543 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:44 compute-0 podman[275580]: 2025-11-22 06:01:44.232643398 +0000 UTC m=+0.093572395 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 06:01:44 compute-0 podman[275582]: 2025-11-22 06:01:44.243367336 +0000 UTC m=+0.103295046 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 06:01:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 06:01:44 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/187603636' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 06:01:44 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948242793' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 06:01:44 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256204332' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:01:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 06:01:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388120625' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mon[75840]: from='client.14543 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/187603636' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1948242793' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4256204332' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mon[75840]: pgmap v1270: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:45 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2388120625' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14555 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 06:01:45 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:45.571+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 06:01:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:45 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 06:01:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982758084' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 06:01:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956909896' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: from='client.14555 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: from='client.14557 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3982758084' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:42.489932+0000 osd.2 (osd.2) 116 : cluster [DBG] 10.3 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:42.507711+0000 osd.2 (osd.2) 117 : cluster [DBG] 10.3 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 1769472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 117) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:42.489932+0000 osd.2 (osd.2) 116 : cluster [DBG] 10.3 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:42.507711+0000 osd.2 (osd.2) 117 : cluster [DBG] 10.3 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:13.610858+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66330624 unmapped: 1769472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812521 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:14.611046+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:15.611272+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:16.611524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:46.423378+0000 osd.2 (osd.2) 118 : cluster [DBG] 10.5 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:46.437622+0000 osd.2 (osd.2) 119 : cluster [DBG] 10.5 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:17.611782+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 119) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:46.423378+0000 osd.2 (osd.2) 118 : cluster [DBG] 10.5 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:46.437622+0000 osd.2 (osd.2) 119 : cluster [DBG] 10.5 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:18.611921+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813669 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:19.612081+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843079567s of 12.888220787s, submitted: 10
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:20.612459+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:50.476392+0000 osd.2 (osd.2) 120 : cluster [DBG] 10.a deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:50.490572+0000 osd.2 (osd.2) 121 : cluster [DBG] 10.a deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 121) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:50.476392+0000 osd.2 (osd.2) 120 : cluster [DBG] 10.a deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:50.490572+0000 osd.2 (osd.2) 121 : cluster [DBG] 10.a deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:21.612712+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:51.438638+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:51.452789+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 123) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:51.438638+0000 osd.2 (osd.2) 122 : cluster [DBG] 10.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:51.452789+0000 osd.2 (osd.2) 123 : cluster [DBG] 10.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:22.613021+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:52.489211+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:52.503313+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 125) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:52.489211+0000 osd.2 (osd.2) 124 : cluster [DBG] 10.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:52.503313+0000 osd.2 (osd.2) 125 : cluster [DBG] 10.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:23.613301+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66363392 unmapped: 1736704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817114 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:24.613505+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:25.613672+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:26.613850+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:27.614003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66379776 unmapped: 1720320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:28.614130+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:58.417592+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:58.431670+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 127) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:58.417592+0000 osd.2 (osd.2) 126 : cluster [DBG] 10.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:58.431670+0000 osd.2 (osd.2) 127 : cluster [DBG] 10.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 1712128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 818263 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:29.614463+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:59.422262+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:29:59.436418+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 129) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:59.422262+0000 osd.2 (osd.2) 128 : cluster [DBG] 10.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:29:59.436418+0000 osd.2 (osd.2) 129 : cluster [DBG] 10.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:30.614751+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:00.457888+0000 osd.2 (osd.2) 130 : cluster [DBG] 10.1d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:00.472028+0000 osd.2 (osd.2) 131 : cluster [DBG] 10.1d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 131) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:00.457888+0000 osd.2 (osd.2) 130 : cluster [DBG] 10.1d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:00.472028+0000 osd.2 (osd.2) 131 : cluster [DBG] 10.1d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66404352 unmapped: 1695744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:31.615159+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 1687552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:32.615454+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 1679360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:33.615610+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66420736 unmapped: 1679360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820561 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:34.615907+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66428928 unmapped: 1671168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:35.616108+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.874602318s of 15.915491104s, submitted: 12
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:36.616257+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:06.391876+0000 osd.2 (osd.2) 132 : cluster [DBG] 10.1f deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:06.409546+0000 osd.2 (osd.2) 133 : cluster [DBG] 10.1f deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 133) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:06.391876+0000 osd.2 (osd.2) 132 : cluster [DBG] 10.1f deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:06.409546+0000 osd.2 (osd.2) 133 : cluster [DBG] 10.1f deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:37.616652+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 1638400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:38.616805+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:08.334861+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:08.348977+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 135) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:08.334861+0000 osd.2 (osd.2) 134 : cluster [DBG] 11.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:08.348977+0000 osd.2 (osd.2) 135 : cluster [DBG] 11.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 1638400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824007 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:39.617124+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:09.377254+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:09.391404+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 137) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:09.377254+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:09.391404+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:40.617541+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:41.617733+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:11.312897+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:11.326916+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 139) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:11.312897+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:11.326916+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:42.617987+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:12.338779+0000 osd.2 (osd.2) 140 : cluster [DBG] 8.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:12.352904+0000 osd.2 (osd.2) 141 : cluster [DBG] 8.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 141) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:12.338779+0000 osd.2 (osd.2) 140 : cluster [DBG] 8.15 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:12.352904+0000 osd.2 (osd.2) 141 : cluster [DBG] 8.15 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:43.618229+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:13.334851+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:13.348980+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 143) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:13.334851+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.d scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:13.348980+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.d scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828597 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:44.618422+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:14.331940+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:14.346145+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 145) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:14.331940+0000 osd.2 (osd.2) 144 : cluster [DBG] 8.2 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:14.346145+0000 osd.2 (osd.2) 145 : cluster [DBG] 8.2 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:45.618896+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:15.315038+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.3 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:15.329261+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.3 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 147) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:15.315038+0000 osd.2 (osd.2) 146 : cluster [DBG] 11.3 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:15.329261+0000 osd.2 (osd.2) 147 : cluster [DBG] 11.3 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:46.619186+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:16.325872+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:16.339999+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 149) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:16.325872+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:16.339999+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.916749001s of 10.982564926s, submitted: 18
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:47.619854+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:17.374390+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.9 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:17.392045+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.9 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 151) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:17.374390+0000 osd.2 (osd.2) 150 : cluster [DBG] 11.9 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:17.392045+0000 osd.2 (osd.2) 151 : cluster [DBG] 11.9 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:48.620088+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:18.377007+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:18.391136+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 153) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:18.377007+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:18.391136+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833189 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:49.620236+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:50.620355+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:20.343456+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:20.357587+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 155) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:20.343456+0000 osd.2 (osd.2) 154 : cluster [DBG] 8.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:20.357587+0000 osd.2 (osd.2) 155 : cluster [DBG] 8.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:51.620556+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:52.620699+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:53.620873+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834337 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:54.621023+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:55.621298+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:25.252287+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:25.266212+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 157) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:25.252287+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:25.266212+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:56.621550+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:57.621737+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:58.621981+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835486 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:59.622145+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.905089378s of 12.933971405s, submitted: 8
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:00.622339+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:30.308503+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:30.322591+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 159) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:30.308503+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1b scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:30.322591+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1b scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:01.622688+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:31.263141+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:31.277200+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 161) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:31.263141+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:31.277200+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:02.622947+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:03.623200+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:33.257916+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:33.272032+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 163) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:33.257916+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:33.272032+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838933 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:04.623521+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:05.623791+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:06.623994+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:07.624224+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:37.183355+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.1f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:37.197383+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.1f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 165) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:37.183355+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.1f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:37.197383+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.1f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:08.624427+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840082 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:09.624666+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:10.624906+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:11.625107+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:12.625296+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.852154732s of 12.882711411s, submitted: 8
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:13.625525+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:43.191194+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.4 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:43.205317+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.4 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 167) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:43.191194+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.4 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:43.205317+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.4 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841229 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:14.625721+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:15.625838+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:16.625960+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:17.626152+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:18.626273+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842378 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:19.626432+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:49.168915+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.1a scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:49.183024+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.1a scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 169) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:49.168915+0000 osd.2 (osd.2) 168 : cluster [DBG] 11.1a scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:49.183024+0000 osd.2 (osd.2) 169 : cluster [DBG] 11.1a scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 1515520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:20.626639+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:21.626796+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:51.160317+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:51.174524+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 171) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:51.160317+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:51.174524+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:22.627039+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:52.151153+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:52.165255+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 173) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:52.151153+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.1c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:52.165255+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.1c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:23.627247+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:53.132354+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.12 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:53.146532+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.12 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.884706497s of 10.928488731s, submitted: 10
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 175) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:53.132354+0000 osd.2 (osd.2) 174 : cluster [DBG] 11.12 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:53.146532+0000 osd.2 (osd.2) 175 : cluster [DBG] 11.12 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 1482752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846972 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:24.627952+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:54.119586+0000 osd.2 (osd.2) 176 : cluster [DBG] 8.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:54.133724+0000 osd.2 (osd.2) 177 : cluster [DBG] 8.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 177) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:54.119586+0000 osd.2 (osd.2) 176 : cluster [DBG] 8.11 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:54.133724+0000 osd.2 (osd.2) 177 : cluster [DBG] 8.11 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:25.628771+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:55.113747+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.12 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:30:55.127871+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.12 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 179) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:55.113747+0000 osd.2 (osd.2) 178 : cluster [DBG] 8.12 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:30:55.127871+0000 osd.2 (osd.2) 179 : cluster [DBG] 8.12 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:26.629244+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:27.629440+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:28.629594+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:29.630011+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848120 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:30.630160+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:31.630397+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:01.015950+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:01.051289+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 181) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:01.015950+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.e deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:01.051289+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.e deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:32.630709+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:33.630866+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:34.631040+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849267 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.922524452s of 10.949849129s, submitted: 6
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:35.631326+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:05.069581+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.6 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:05.104941+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.6 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 1417216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 183) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:05.069581+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.6 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:05.104941+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.6 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:36.631554+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:37.631720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:38.631936+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:39.632109+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850414 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:40.632268+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:41.632404+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:11.042765+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.17 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:11.071001+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.17 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 185) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:11.042765+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.17 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:11.071001+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.17 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:42.632628+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:12.013343+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:12.051791+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 187) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:12.013343+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.f scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:12.051791+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.f scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:43.632869+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:12.992374+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.7 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:13.027640+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.7 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 189) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:12.992374+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.7 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:13.027640+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.7 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:44.633132+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853856 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:45.633242+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.000711441s of 11.032500267s, submitted: 8
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:46.633393+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:16.102120+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:16.148154+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 191) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:16.102120+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.8 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:16.148154+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.8 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:47.633611+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:17.137823+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:17.169667+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 193) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:17.137823+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.18 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:17.169667+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.18 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:48.633853+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:49.633966+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856151 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:50.634180+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:20.119396+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:20.151041+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 195) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:20.119396+0000 osd.2 (osd.2) 194 : cluster [DBG] 9.c scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:20.151041+0000 osd.2 (osd.2) 195 : cluster [DBG] 9.c scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:51.634390+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:52.634574+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:53.634763+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:54.634954+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 1327104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857298 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:55.635141+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:25.166715+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.13 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:25.198346+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.13 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 197) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:25.166715+0000 osd.2 (osd.2) 196 : cluster [DBG] 9.13 deep-scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:25.198346+0000 osd.2 (osd.2) 197 : cluster [DBG] 9.13 deep-scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:56.635331+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:57.635537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:58.635694+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.984521866s of 13.017908096s, submitted: 8
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:59.635861+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:29.120032+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.19 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  will send 2025-11-22T05:31:29.172936+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.19 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client handle_log_ack log(last 199) v1
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:29.120032+0000 osd.2 (osd.2) 198 : cluster [DBG] 9.19 scrub starts
Nov 22 06:01:46 compute-0 ceph-osd[91881]: log_client  logged 2025-11-22T05:31:29.172936+0000 osd.2 (osd.2) 199 : cluster [DBG] 9.19 scrub ok
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:00.636099+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:01.636245+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:02.636403+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:03.636593+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:04.636802+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:05.636929+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:06.637058+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:07.637227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:08.637393+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:09.637555+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:10.637705+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:11.637917+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:12.638091+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:13.638244+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:14.638389+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:15.638528+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:16.638650+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:17.638822+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:18.638966+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:19.639120+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:20.639264+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:21.639376+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:22.639547+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:23.639673+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:24.639824+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:25.639982+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:26.640130+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:27.640334+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:28.640550+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:29.640690+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:30.640842+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 1204224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:31.640981+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:32.641115+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:33.641249+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:34.641382+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:35.641576+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:36.641750+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:37.641983+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:38.642181+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:39.642387+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 1171456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:40.642601+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:41.642786+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:42.642985+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:43.643154+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:44.643309+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:45.643465+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:46.643663+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:47.643892+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:48.644034+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:49.644210+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:50.644384+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:51.644592+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:52.644748+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:53.644906+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:54.645078+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:55.645227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:56.645418+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:57.645696+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:58.645824+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:59.645970+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:00.646114+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:01.646290+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:02.646434+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:03.646607+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:04.646729+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:05.646884+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:06.646964+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:07.647178+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:08.647347+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:09.647510+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:10.647660+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:11.647813+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:12.647979+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:13.648172+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:14.648369+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:15.648549+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:16.648691+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:17.648914+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:18.649110+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:19.649282+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:20.649441+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:21.649654+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:22.649848+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:23.650043+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:24.650230+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:25.650405+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:26.650583+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:27.650738+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:28.650889+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:29.651030+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:30.651160+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:31.651301+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:32.651446+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:33.651536+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:34.651697+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:35.651842+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:36.652069+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:37.652284+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:38.652463+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:39.652641+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:40.652841+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:41.652991+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:42.653109+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:43.653279+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:44.653438+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:45.653602+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:46.653786+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:47.653974+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:48.654153+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:49.654315+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:50.654567+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:51.654717+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:52.654884+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:53.655548+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:54.655813+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:55.655975+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:56.656116+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:57.656336+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:58.656481+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:59.656608+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:00.656785+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:01.656938+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:02.657076+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:03.657242+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:04.657426+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:05.657596+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:06.657765+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:07.657901+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:08.658062+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:09.658206+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:10.658351+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:11.658564+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:12.658729+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:13.658880+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:14.659079+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:15.659223+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:16.659380+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:17.659588+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:18.659746+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:19.659893+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:20.660042+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:21.660195+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:22.660426+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:23.660586+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:24.660775+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:25.660976+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:26.661194+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:27.661372+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:28.661545+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:29.661701+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:30.661897+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:31.662036+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:32.662234+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:33.662411+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:34.662567+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:35.662721+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:36.662916+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:37.663076+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:38.663226+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:39.663373+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:40.663553+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:41.663722+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:42.663894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:43.664066+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:44.664241+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:45.664381+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:46.664523+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:47.664705+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:48.664881+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:49.665027+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:50.665167+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:51.665307+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:52.665553+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:53.665722+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:54.665906+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:55.666065+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:56.666211+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:57.666511+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:58.666664+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:59.666789+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:00.666947+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:01.667084+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:02.667265+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:03.667431+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:04.667567+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:05.667752+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:06.667891+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:07.668054+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:08.668169+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:09.668291+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:10.668458+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:11.668657+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:12.668774+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:13.668996+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:14.669233+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:15.669401+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:16.669628+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:17.669799+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:18.669967+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:19.670111+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:20.670251+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:21.670399+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:22.670548+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:23.670697+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:24.670866+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:25.671025+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:26.671145+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:27.671337+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:28.671537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:29.671766+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:30.671971+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:31.672139+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:32.672265+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:33.672421+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:34.672610+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:35.672768+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:36.672924+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:37.673071+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:38.673192+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:39.673335+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:40.673532+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:41.673724+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:42.673840+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:43.674027+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:44.674179+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:45.674280+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:46.674427+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:47.674595+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:48.674782+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:49.674924+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:50.675044+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:51.675156+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:52.675309+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:53.675444+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:54.675538+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:55.675699+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:56.675878+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:57.676064+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:58.676266+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:59.676674+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:00.676852+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:01.677012+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:02.677196+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:03.677310+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:04.677435+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:05.677612+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:06.677799+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:07.678028+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:08.678243+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:09.678409+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:10.678654+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:11.678844+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:12.679177+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:13.679334+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:14.679645+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:15.679945+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:16.680076+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:17.680309+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:18.680506+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:19.680715+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:20.680885+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:21.681251+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:22.681437+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5569 writes, 23K keys, 5569 commit groups, 1.0 writes per commit group, ingest: 18.55 MB, 0.03 MB/s
                                           Interval WAL: 5569 writes, 822 syncs, 6.77 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:23.681598+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:24.681926+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:25.682117+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:26.682343+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:27.682607+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:28.682742+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:29.682945+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:30.683090+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:31.683243+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:32.683389+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:33.683702+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:34.683880+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:35.684021+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:36.684368+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:37.684570+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:38.684803+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:39.684996+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:40.685213+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:41.685338+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:42.685524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:43.685722+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:44.685935+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:45.686097+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:46.686247+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:47.686391+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:48.686552+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:49.686801+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:50.687085+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:51.687197+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:52.687357+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:53.687556+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:54.687677+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:55.687783+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:56.687883+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 212992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:57.688029+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:58.688203+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:59.688355+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:00.688544+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:01.688690+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:02.688841+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:03.689003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:04.689220+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:05.689443+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:06.689562+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:07.689719+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:08.689855+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:09.690015+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:10.690244+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:11.690379+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:12.690557+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:13.690716+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:14.690838+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:15.690986+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:16.691163+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:17.691332+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:18.691529+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:19.691701+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:20.691835+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:21.691988+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:22.692133+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:23.692278+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:24.692422+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:25.692576+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:26.692726+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:27.692908+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:28.693038+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:29.693153+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:30.693307+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:31.693461+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:32.693680+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:33.693827+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:34.693954+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.694178+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.694347+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.694519+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.694661+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.694797+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.694951+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.695173+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.695340+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.695544+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.695701+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 345.672515869s of 345.679687500s, submitted: 2
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.695855+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.695970+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 1769472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.696129+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.696301+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.696520+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.697558+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.697639+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1662976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.698313+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.698595+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.698867+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.699141+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.699328+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.699516+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.699778+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.699920+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.700079+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.700225+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.700373+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.700545+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.700794+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.700945+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.701205+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 1613824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.701511+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.701622+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.701763+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.701966+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.702160+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.702353+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.702460+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.702614+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.702757+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.702888+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.703098+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.703241+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.703434+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.703543+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.703686+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.703838+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.704003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.704169+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.704352+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.704524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.704711+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.704893+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.705035+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.705153+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.705359+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.705581+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.705765+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.705962+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.706099+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.706264+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.706415+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.706572+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.706703+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.706836+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.706988+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.707131+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.707297+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.707462+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.707640+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.707955+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.708125+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.708281+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.708434+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.708554+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.708706+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.708844+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.709019+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.709152+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.709328+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.709520+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.709727+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.709852+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.709971+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.710118+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.710274+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.710462+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.710639+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.710761+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.710894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.711021+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.711194+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.711329+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.711504+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.711648+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.711797+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.711978+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.712118+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.712252+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.712368+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.712534+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.712704+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.712827+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.712958+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.713106+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.713267+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.713403+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.713537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.713685+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.713837+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.713990+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.714173+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.714322+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.714420+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.714512+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.714658+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.714778+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.714954+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.715078+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.715227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.715374+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.715533+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.715696+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.715853+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.715979+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.716077+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.716196+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.716319+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.716502+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.716614+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.716760+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.716951+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.717679+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.717839+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.717999+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.718137+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.718290+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.718530+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.718726+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.718939+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.719082+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.719277+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.719530+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.719782+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.720205+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.720342+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.720547+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.720720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.720894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.721023+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.721138+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.721660+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.721799+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.722061+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.722213+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.722345+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.722627+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.722781+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.723071+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.723255+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.723405+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.723596+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.723813+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.724016+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.724151+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.724324+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.724548+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.724729+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.724923+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.725066+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.725194+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.725380+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.725530+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.725717+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.725863+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.726003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.726141+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.726267+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.726409+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.726558+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.726698+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.726857+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.727003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.727178+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.727336+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.727606+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.727723+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.727843+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.728029+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.728166+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.728312+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.728504+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.728728+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.728899+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.729154+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.729313+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.729529+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.729701+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.729957+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.730093+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.730230+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.730417+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.730552+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.730734+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.730913+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.731103+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.731247+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.731409+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.731557+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.731732+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.731890+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.732051+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.732211+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.732338+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.732517+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.732667+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.732809+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.732928+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.733103+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.733253+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.733559+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.733842+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.734003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.734125+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.734222+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.734369+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.734577+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.734720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.734841+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.735017+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.735201+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.735382+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.735548+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.735712+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.735816+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27ae5fc00
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: get_auth_request con 0x55c27c775400 auth_method 0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.736006+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.736181+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.736349+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.736655+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.736784+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.736912+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.737116+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.737270+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.737431+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.737550+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.737668+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.737831+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.737982+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.738163+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.738350+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.738526+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.738705+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.738886+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.739048+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.739251+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.739412+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.739626+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.739759+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.739940+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.740101+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.740396+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.740614+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.740768+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.740883+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.741064+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.741234+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.741365+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.741513+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.741699+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.741912+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.742121+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.742279+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.742428+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.742562+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.742707+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.742869+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.743057+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.743261+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.743459+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.743625+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.743757+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.743969+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.744095+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.744245+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.744532+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.744667+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.744794+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.744923+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.745067+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.745190+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.745320+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.745524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.745658+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.746015+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.746193+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.746313+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.746454+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.746680+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.746917+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.747092+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.747269+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 ms_handle_reset con 0x55c27d491c00 session 0x55c27c84cd20
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd64800
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.747551+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.747717+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.747818+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.747975+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.748382+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.748527+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.748691+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.748872+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.749025+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.749180+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.749353+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.749534+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.749680+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.749834+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.749982+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.750124+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.750239+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.750414+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.750577+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.750731+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.750939+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.751081+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.751258+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.751427+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.751576+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.751736+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.751861+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.752032+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.752200+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.752433+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.752915+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.753103+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.753253+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.753412+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.753539+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.753667+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.753831+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.754020+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.754211+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.754417+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.754607+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.754740+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.754889+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.755012+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.755190+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.755369+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.755576+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.755786+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.755949+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.756124+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.757770+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.757982+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.758141+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.758355+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.758615+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.758882+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.759275+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.759531+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.759759+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.759996+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.760292+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.760521+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.760771+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.760999+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.761262+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.761556+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.761799+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.762069+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.762317+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.762551+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.762770+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.762969+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.763168+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.763341+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.763882+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.764015+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.764176+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.764327+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.764579+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.764872+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.765121+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.765256+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.765392+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.765517+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.765650+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.765853+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.766030+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.766207+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.766379+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.766551+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.766907+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.767101+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.767276+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.767599+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.767744+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.767913+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.768052+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.768220+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.768364+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.768518+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.768675+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.768817+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.768986+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.769142+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.769321+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.769587+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.769844+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.770083+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.770336+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.770583+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.770833+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.771144+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.771332+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.773643+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.773838+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.774067+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.774298+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.774571+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.774821+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.775104+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.775559+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.775789+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.776084+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.776381+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.776693+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.776956+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.777219+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.777537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.777807+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.777986+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.778251+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.778518+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.778694+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.778904+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.779067+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.779210+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.779524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.779719+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.779881+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.780026+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.780373+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.780652+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.780806+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.781105+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.781256+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.781403+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.781654+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.781945+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.782193+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.782404+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.782724+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.782848+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.782973+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.783129+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.783236+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.783347+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.783622+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.783796+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.783950+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.784183+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.784611+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.784773+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.784965+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.387999+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.388129+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.388286+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.388428+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.388544+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.388674+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.388824+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.388967+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.389139+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.389263+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.389371+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.389553+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.389690+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.389825+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.389975+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.390227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.390459+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.390641+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.390771+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.390943+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.391058+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.391212+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.391390+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.391608+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.391740+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.392009+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.392156+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.392592+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.392676+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.393016+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.393191+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.393334+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.393577+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.393706+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.393830+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.393954+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.394089+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.394282+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.394444+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.394703+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.394840+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.394963+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.395109+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.395262+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.395388+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.395524+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.395703+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.395837+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.395950+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.396093+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.396253+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.396506+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.396605+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.396707+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.396821+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.397032+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.397197+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.397351+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.397488+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.397648+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.397807+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.397950+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.398094+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.398975+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.399163+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.399615+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.399896+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.400096+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.400271+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.400563+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.400811+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.401082+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.401363+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.401626+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.401832+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.401992+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.402215+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.402338+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.402547+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.402746+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.402976+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.403220+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.403424+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.403588+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.403758+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.403913+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.404153+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.404359+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.404570+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.404784+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.404933+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.405098+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.405249+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.405615+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.405911+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.406114+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.406362+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.406534+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.406694+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.406902+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.407075+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.407321+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.407508+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.407705+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.407901+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.408121+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.408350+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.408566+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.408871+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.409149+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.409368+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.409616+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.409895+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.410116+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.410285+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.410433+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.410679+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.410894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.411094+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.411246+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.411373+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.411509+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.411666+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.411831+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.412003+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.412178+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.412397+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.412567+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.412784+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.413250+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.413520+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.413651+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.413863+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.414180+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.414406+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.414671+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.414855+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.415090+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.415303+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.415554+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.415779+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.415928+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.416135+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.100036621s of 600.098510742s, submitted: 90
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.416286+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.416420+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.416582+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.416750+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.416912+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.417175+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.417339+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.417537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.417752+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.417962+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.418189+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.418440+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.418590+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.418798+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.418996+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.419175+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.419338+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.419537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.419731+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.419920+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.420072+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.420222+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.420374+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.420566+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.420778+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.420965+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.421099+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.421258+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.421424+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.421597+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.421701+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.421863+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.422015+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.422186+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.422332+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.422532+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.423075+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.423235+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.423386+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.423537+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.423713+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.423865+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.423977+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.424169+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.424330+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.424452+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.424535+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.424691+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.424833+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.424990+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.425350+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.425569+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.425883+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.426653+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.426977+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.427227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.427565+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.427839+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.428069+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.428583+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.429160+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.429421+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.429681+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.430033+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.430365+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.430627+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.430857+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.431000+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.431166+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.431369+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.431568+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.431782+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.432039+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.432276+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.432512+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.432694+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.432894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.433053+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.433221+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.433369+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.433871+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.433964+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.434114+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.434312+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.434442+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.434603+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.434716+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.434894+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.435078+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.435305+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.435563+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.435741+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.435899+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.436055+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.436146+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.436320+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.436443+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.436603+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.436747+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.436901+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.437054+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.437211+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.437398+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.437593+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.437817+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.438020+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.438227+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.438424+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.438646+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.438827+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.439017+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.439197+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.439349+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.439541+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.439721+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.440166+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.440527+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.440757+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.440926+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.441253+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.441544+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.441980+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.442229+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.442447+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.443001+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.443232+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.443596+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.443884+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.444152+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.445149+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.445370+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.445587+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.445791+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.446140+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.446333+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.446566+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.446770+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.446974+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.447236+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.447633+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.447841+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.448057+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.448248+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.448536+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.448729+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.448911+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.449102+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.449297+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.449460+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.449715+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.449876+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.450080+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.450325+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.450536+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.450787+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.451002+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.451236+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.451459+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.451849+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.452086+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.452271+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.452460+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.452722+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.452983+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.453163+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.453365+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.453549+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.453720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.453891+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.454104+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 06:01:46 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220139389' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.454390+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.454625+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.454823+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.455104+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.455393+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.455572+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.455712+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.456119+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.456394+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.457509+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.457677+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.457821+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.458030+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.458259+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.458413+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.458832+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd64c00
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.459052+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.460790+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 187.620025635s of 187.940231323s, submitted: 90
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.460953+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981955 data_alloc: 218103808 data_used: 184320
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 24182784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.461132+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 130 ms_handle_reset con 0x55c27dd64c00 session 0x55c27b9cda40
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24158208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65000
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10bd8da/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,1])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.461271+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba98000/0x0/0x4ffc00000, data 0x10bd90d/0x1185000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.461568+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 131 ms_handle_reset con 0x55c27dd65000 session 0x55c27d99da40
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.461815+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.462040+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991578 data_alloc: 218103808 data_used: 188416
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.462689+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.463079+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.463303+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.463648+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228631973s of 10.493903160s, submitted: 48
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.463938+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.464329+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.464533+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.464747+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.464995+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.465219+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.465428+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.465669+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.465880+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.466067+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.466305+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.466580+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.466758+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.466953+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.467977+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.469150+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993528 data_alloc: 218103808 data_used: 192512
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.470364+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.470642+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.048984528s of 17.207933426s, submitted: 15
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 23986176 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65400
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.470942+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 23969792 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.471239+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 10
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba8c000/0x0/0x4ffc00000, data 0x10c6f88/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 23945216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.471957+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996008 data_alloc: 218103808 data_used: 192512
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 23879680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.472192+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 23617536 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.472763+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.473057+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.473288+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 23306240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.473570+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000508 data_alloc: 218103808 data_used: 192512
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 23371776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.473802+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 23240704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.474006+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 11
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.229097366s of 10.394592285s, submitted: 43
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10e64b7/0x11b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 23126016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.474205+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 23117824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.474383+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 23044096 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.474581+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001568 data_alloc: 218103808 data_used: 192512
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba61000/0x0/0x4ffc00000, data 0x10f0ecd/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 23019520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.474759+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.474904+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.475050+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba57000/0x0/0x4ffc00000, data 0x10fbde4/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 21725184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.475194+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 20561920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.475592+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba4b000/0x0/0x4ffc00000, data 0x110823a/0x11d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008874 data_alloc: 218103808 data_used: 200704
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.475769+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.475930+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060367584s of 10.409746170s, submitted: 65
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.476126+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1116701/0x11e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.476270+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.476423+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007998 data_alloc: 218103808 data_used: 200704
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.476628+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 20340736 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.476892+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x11254f0/0x11f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 20250624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.477117+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 20373504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.477252+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 20299776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.478021+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013872 data_alloc: 218103808 data_used: 208896
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.478246+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.478577+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 20226048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x113fb10/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.478777+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.961433411s of 10.210658073s, submitted: 54
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.478948+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.479150+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010722 data_alloc: 218103808 data_used: 208896
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.479313+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 20078592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.479553+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 20054016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.479720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9fb000/0x0/0x4ffc00000, data 0x1156181/0x1223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 20045824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.479905+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 19922944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9f6000/0x0/0x4ffc00000, data 0x115af0e/0x1228000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.480134+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011714 data_alloc: 218103808 data_used: 208896
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.480338+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa854000/0x0/0x4ffc00000, data 0x115cd59/0x122a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.480549+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.480786+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.710002899s of 10.840860367s, submitted: 28
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.480961+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x1166919/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.481140+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010124 data_alloc: 218103808 data_used: 208896
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.481314+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.481498+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.481642+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.481807+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.482027+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011676 data_alloc: 218103808 data_used: 208896
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.482200+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.482359+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.482531+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 17539072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.881333351s of 10.000229836s, submitted: 24
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.482677+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.482869+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1180b27/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017194 data_alloc: 218103808 data_used: 217088
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.483156+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 16302080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.483332+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0x118a04b/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 16261120 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.483576+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16220160 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.483758+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.483938+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020498 data_alloc: 218103808 data_used: 217088
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.484183+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.484405+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0x119b383/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 16097280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.484584+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.363933563s of 10.000885010s, submitted: 68
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.484728+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.484969+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022282 data_alloc: 218103808 data_used: 217088
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.485101+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7f7000/0x0/0x4ffc00000, data 0x11b5e3b/0x1286000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 16187392 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.485268+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.485413+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 16138240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.485569+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.485720+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027624 data_alloc: 218103808 data_used: 225280
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.485847+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7e0000/0x0/0x4ffc00000, data 0x11ccd00/0x129d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 15966208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.485992+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 14663680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.486108+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.663500786s of 10.003384590s, submitted: 58
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x11f6ec0/0x12c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.486287+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.486518+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030366 data_alloc: 218103808 data_used: 225280
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.486736+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 14467072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa796000/0x0/0x4ffc00000, data 0x1216d90/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.486893+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.487058+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa783000/0x0/0x4ffc00000, data 0x122a36a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.487223+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 13795328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.487407+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 13811712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052002 data_alloc: 218103808 data_used: 233472
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.487582+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 13123584 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.487711+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa71f000/0x0/0x4ffc00000, data 0x12889a8/0x135d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.487889+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x12a03d6/0x1375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.410496712s of 10.063361168s, submitted: 153
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.488041+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 12115968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.488212+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049980 data_alloc: 218103808 data_used: 233472
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.488416+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.488647+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12025856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.488805+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 11812864 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.488985+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 10698752 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa6c4000/0x0/0x4ffc00000, data 0x12e34ad/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.489248+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 11739136 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062908 data_alloc: 218103808 data_used: 241664
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.489386+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 12328960 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0x130b8fd/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.489541+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.489750+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.489893+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.388790131s of 10.036962509s, submitted: 152
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 12066816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.490575+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 11968512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067180 data_alloc: 218103808 data_used: 249856
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.491893+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 11952128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.492468+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa65e000/0x0/0x4ffc00000, data 0x1348b92/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.493132+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.494013+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 10993664 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.494758+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa647000/0x0/0x4ffc00000, data 0x1360b19/0x1437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069652 data_alloc: 218103808 data_used: 258048
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.495060+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.495398+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 10616832 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.495529+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.495697+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.325051308s of 10.040717125s, submitted: 60
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.495851+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075246 data_alloc: 218103808 data_used: 266240
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.496173+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa629000/0x0/0x4ffc00000, data 0x137b131/0x1454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.496343+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.496574+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 10461184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.496885+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.497175+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079110 data_alloc: 218103808 data_used: 266240
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.497310+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.497467+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 10428416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.497683+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 10395648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.497938+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.498156+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ed000/0x0/0x4ffc00000, data 0x13b6889/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.324265480s of 11.515779495s, submitted: 35
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077896 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.498376+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.498697+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.498870+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 10321920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ee000/0x0/0x4ffc00000, data 0x13b67ee/0x1490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.499097+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.499364+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086088 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.499634+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.499940+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9232384 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa59e000/0x0/0x4ffc00000, data 0x1404cc1/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.500111+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9199616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.500322+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 9625600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.500573+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093304 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.500716+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734145164s of 11.004765511s, submitted: 52
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.500953+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa56b000/0x0/0x4ffc00000, data 0x1438f3c/0x1513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.501153+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.501306+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.504416+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x147084c/0x154b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095534 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.504608+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.504812+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.504969+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9150464 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.505157+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.505371+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x14983d4/0x1572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095530 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.505527+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.505677+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.397135735s of 11.280517578s, submitted: 59
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.505871+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7593984 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.506005+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 7479296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x14d321d/0x15ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.506209+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098050 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.506343+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.506509+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 7462912 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.506775+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.506934+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.507083+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099392 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.507367+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.507540+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.507735+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.507914+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.508095+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.184447289s of 12.378032684s, submitted: 30
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098094 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.508264+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.508420+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.508590+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.508762+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.508913+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098462 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.509078+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65800
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.509250+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 12
Nov 22 06:01:46 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.509414+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.509550+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.509775+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.734287262s of 10.033769608s, submitted: 19
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4cd000/0x0/0x4ffc00000, data 0x14d592e/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099814 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.510030+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.510414+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.510842+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.511054+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.511289+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.511510+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101084 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.511821+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.512009+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.512213+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.512394+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.512573+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100090 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359554291s of 11.549299240s, submitted: 18
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.512729+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 7864320 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.512877+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.513020+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.513213+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.513359+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.513544+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.513805+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.514459+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.514672+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d5816/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.514916+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.515138+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.515464+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.601215363s of 11.686765671s, submitted: 14
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.515669+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.515845+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.516030+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100378 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.516245+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.516417+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.516586+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.516761+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.516906+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101794 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d58b5/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.517040+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.517210+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.517372+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926100731s of 11.090178490s, submitted: 16
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.517533+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.517690+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101618 data_alloc: 218103808 data_used: 274432
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 7979008 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.517850+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.518027+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.518195+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.518445+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.518613+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:46 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:46 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106954 data_alloc: 218103808 data_used: 282624
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d742f/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.518759+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.518900+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:46 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.519056+0000)
Nov 22 06:01:46 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.519285+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x14d7530/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.746568680s of 10.998859406s, submitted: 51
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.519445+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108006 data_alloc: 218103808 data_used: 282624
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.519532+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.519752+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.519928+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.520092+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.520178+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115008 data_alloc: 218103808 data_used: 290816
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.520321+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.520518+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.520650+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14daac0/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.520757+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc543/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.520945+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119046 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848780632s of 10.849118233s, submitted: 70
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.521093+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.521219+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.521440+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.521678+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc608/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.521843+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119676 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.522014+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc541/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.522153+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.522329+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.522559+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.522715+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119372 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.522887+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.558925629s of 10.797169685s, submitted: 23
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.523007+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.523123+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.523307+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.523536+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120626 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.523708+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.523884+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.524065+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 7766016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.524228+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 ms_handle_reset con 0x55c27dd65800 session 0x55c27d401c20
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14dc44b/0x15bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 7143424 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.524376+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121304 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 13
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.524525+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.524677+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979153633s of 11.168646812s, submitted: 206
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.524834+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.524996+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.525190+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120872 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.525318+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.525546+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.525696+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.525852+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc5e2/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.525964+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122944 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc5ad/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.526106+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.526244+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.526389+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.424748421s of 10.796654701s, submitted: 27
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 7069696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.526517+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.526636+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123366 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc5e1/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.526788+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.526945+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 6905856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.527116+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.527300+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1508cc2/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.527550+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130868 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 5865472 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.527724+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90677248 unmapped: 5750784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.527909+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 5332992 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.528063+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa02d000/0x0/0x4ffc00000, data 0x155d2e5/0x163f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.528279+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.726997375s of 10.987854004s, submitted: 60
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.528451+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142070 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 4874240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.528685+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 4759552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.528843+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9ff1000/0x0/0x4ffc00000, data 0x159c2c8/0x167d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.529015+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.529168+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.529359+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143868 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.529526+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 3416064 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.529673+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 3145728 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.529829+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.530031+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.212817192s of 10.548931122s, submitted: 84
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.530164+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151538 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 3604480 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.530344+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 3596288 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.530547+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 3563520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.530742+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.530887+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.531015+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158170 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2383872 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.531157+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2220032 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9eb9000/0x0/0x4ffc00000, data 0x16d2996/0x17b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.531336+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 2670592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.531529+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93806592 unmapped: 2621440 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.531679+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x1705e04/0x17e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.531826+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154906 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.826278687s of 11.162016869s, submitted: 70
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e79000/0x0/0x4ffc00000, data 0x1714710/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.531999+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93995008 unmapped: 2433024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.532174+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e56000/0x0/0x4ffc00000, data 0x1736fbe/0x1817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2236416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.532333+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2228224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.532593+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e42000/0x0/0x4ffc00000, data 0x174b9d7/0x182c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1179648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.532745+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161486 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.532913+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.533459+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.533702+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e15000/0x0/0x4ffc00000, data 0x17766f9/0x1858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 1916928 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x17933f1/0x1874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.534642+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 1867776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.534945+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170682 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 2760704 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.535300+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.836094856s of 10.913021088s, submitted: 68
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.536094+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.536285+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d8f000/0x0/0x4ffc00000, data 0x17fcccc/0x18de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95117312 unmapped: 2359296 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.536920+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1433600 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.537453+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172456 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.538018+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:16.538345+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d5a000/0x0/0x4ffc00000, data 0x1832e2f/0x1913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.538537+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.538686+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.538894+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180160 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 1990656 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.539082+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 2457600 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.539278+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.346602440s of 10.624962807s, submitted: 67
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96133120 unmapped: 2392064 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.539518+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cf2000/0x0/0x4ffc00000, data 0x189b875/0x197c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.539673+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s
                                           Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.539839+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179516 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 2260992 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.540031+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cc2000/0x0/0x4ffc00000, data 0x18ca961/0x19ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.540223+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.540369+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.540566+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 2039808 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.540749+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.540911+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27c775400
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: get_auth_request con 0x55c27dd65800 auth_method 0
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.541055+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.541208+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.541388+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.541568+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.541701+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670079231s of 13.884990692s, submitted: 22
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.541845+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.541981+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.542148+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.542287+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177262 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.542519+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.542681+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.542830+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.543039+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.543249+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.543442+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.543612+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.543777+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.543953+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.544090+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.544264+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.544428+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.544593+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.545035+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.545307+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.545583+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.545739+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.552337646s of 21.707801819s, submitted: 8
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.545901+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.546075+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.546324+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176268 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.546553+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.546810+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.546973+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.547158+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.547360+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177860 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.547536+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.547755+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.547944+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.548163+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.837540627s of 11.850649834s, submitted: 3
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.548349+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179404 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.548559+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.548733+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.548900+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.549096+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.549355+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177186 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.549523+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.549696+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.549862+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.550061+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.550214+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178762 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.550375+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.550584+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.027555466s of 13.073743820s, submitted: 11
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.550794+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.550967+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.551135+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.551348+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.551530+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.551721+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.551904+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.552056+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179664 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.552258+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.552533+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.552710+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.837194443s of 10.898418427s, submitted: 7
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.552821+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 827392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.552968+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 901120 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184054 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.553114+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.553333+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc750/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.553507+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.553699+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.553870+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185166 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.554040+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 1933312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.554230+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.554415+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.554591+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.554714+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.652610779s of 11.716604233s, submitted: 16
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185554 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.554895+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 1867776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.555123+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.555285+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.555572+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.555739+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186584 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.555874+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.556005+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.556168+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.556330+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b3/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.556562+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185590 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.556719+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.556911+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.021935463s of 12.413156509s, submitted: 105
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.557113+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.557395+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.557654+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186668 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.557897+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.558071+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.558232+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.558435+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.558621+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185978 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.558761+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.558945+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.559097+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.559358+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.559588+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185128 data_alloc: 218103808 data_used: 299008
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.559752+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.800412178s of 13.922379494s, submitted: 10
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.560021+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.560168+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.560350+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.560540+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189286 data_alloc: 218103808 data_used: 307200
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.560819+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.561049+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 843776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.561211+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.561397+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.561589+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189462 data_alloc: 218103808 data_used: 307200
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.561762+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.561896+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.562038+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.972100258s of 12.103911400s, submitted: 26
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.562213+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.562403+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192404 data_alloc: 218103808 data_used: 315392
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.562584+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.562740+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 14
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65000
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.562895+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfaff/0x19c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.563077+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc11/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.563240+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 761856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193468 data_alloc: 218103808 data_used: 315392
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.563394+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.563567+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.563742+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.563904+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.564083+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193644 data_alloc: 218103808 data_used: 315392
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.564213+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.880873680s of 12.922811508s, submitted: 19
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.564412+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.564584+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.564807+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18dfcd0/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.564974+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198760 data_alloc: 218103808 data_used: 315392
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.565111+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.565272+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.565424+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc35/0x19c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.565587+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.565788+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.565940+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197700 data_alloc: 218103808 data_used: 323584
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.566202+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.758671761s of 10.983880043s, submitted: 61
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.566394+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.566687+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.566876+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.567005+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197876 data_alloc: 218103808 data_used: 323584
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.567164+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.567324+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.567591+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.567759+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.567942+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201698 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.568117+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.568254+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.046489716s of 11.064700127s, submitted: 14
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.568409+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.568551+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.568683+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202762 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.568809+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.569112+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.569772+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.569920+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.570070+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202586 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.570227+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.570668+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.570850+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.863492966s of 10.986426353s, submitted: 33
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.571000+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.571146+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18e4d7e/0x19ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206958 data_alloc: 218103808 data_used: 339968
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1810432 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.571284+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.571536+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.571743+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.571921+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.572186+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214634 data_alloc: 218103808 data_used: 352256
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.572356+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.572503+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.572666+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.573063+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.573411+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e8559/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214978 data_alloc: 218103808 data_used: 352256
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.761722565s of 12.017519951s, submitted: 41
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.573691+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 1761280 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.573908+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 1744896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.574052+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.575121+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.575399+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223380 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.575642+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.575826+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c95000/0x0/0x4ffc00000, data 0x18ebca8/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.576344+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.576513+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.576734+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220608 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.576945+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.814929008s of 10.932528496s, submitted: 43
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.577165+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c98000/0x0/0x4ffc00000, data 0x18ebaf8/0x19d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.577402+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.577614+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.577806+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224766 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.578030+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.578234+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.578431+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.578571+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.578708+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226534 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.578875+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.579050+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.579211+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.579380+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.052343369s of 13.091160774s, submitted: 15
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.579600+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225494 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.579765+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.580064+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.580307+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.580500+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.580659+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c94000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226380 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.580855+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.581033+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.581379+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.581530+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.581660+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228808 data_alloc: 218103808 data_used: 376832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.581822+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.581982+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.582175+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.582321+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.807965279s of 14.139179230s, submitted: 58
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.582539+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229000 data_alloc: 218103808 data_used: 376832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.582716+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.582885+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.583062+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.583198+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 161 heartbeat osd_stat(store_statfs(0x4f9c8a000/0x0/0x4ffc00000, data 0x18f27c6/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.583354+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236804 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.583529+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.583666+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100155392 unmapped: 1515520 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.583803+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.583942+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.584086+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239106 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.630488396s of 11.893076897s, submitted: 66
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.584209+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.584331+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.584453+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.584595+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.584724+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242096 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 163 heartbeat osd_stat(store_statfs(0x4f9c85000/0x0/0x4ffc00000, data 0x18f5e0f/0x19e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.584856+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.584974+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.585130+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.585296+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.585520+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245550 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.585676+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.585848+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.243680000s of 11.435800552s, submitted: 63
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.585988+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c81000/0x0/0x4ffc00000, data 0x18f7ac0/0x19ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.586130+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.586294+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249940 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.586449+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100229120 unmapped: 1441792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.586557+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.586705+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.586861+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 166 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x18fb08e/0x19f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.587037+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255022 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.587172+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x18fcca4/0x19f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.587287+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.587417+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.587608+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.215492249s of 12.445914268s, submitted: 69
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.587755+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258316 data_alloc: 218103808 data_used: 401408
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.588313+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.588414+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.588645+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.588822+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.589003+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258492 data_alloc: 218103808 data_used: 401408
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.589150+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.589340+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100286464 unmapped: 1384448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.589565+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 169 ms_handle_reset con 0x55c27dd65000 session 0x55c27f3a21e0
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 1048576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.589705+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 15
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.589863+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261466 data_alloc: 218103808 data_used: 401408
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.590039+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.873164177s of 11.996927261s, submitted: 252
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.590218+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.590400+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.590598+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.590754+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260762 data_alloc: 218103808 data_used: 401408
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.590895+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.591083+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9864000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.591252+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.591398+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.591624+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.591858+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.592035+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.592266+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.592445+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.592532+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.592714+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.592908+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.593067+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.593255+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.593469+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.593639+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.593793+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.594029+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.594232+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.594435+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.594586+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.595046+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.595253+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.595398+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.595553+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.595723+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.595900+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.596071+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.596214+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.596397+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.596651+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.596816+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.597010+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.597173+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.597399+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.597561+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.597795+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.599785+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.601301+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.602303+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.602648+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.603077+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.603836+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.604330+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.604581+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.605444+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.605548+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.606093+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.606350+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.606867+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.607145+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.607311+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.607868+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.363555908s of 56.390811920s, submitted: 15
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 ms_handle_reset con 0x55c27dd65400 session 0x55c27c84cd20
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.608086+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 16
Nov 22 06:01:47 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.608395+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.608573+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.608741+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.608888+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.609031+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.609154+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.609279+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.609448+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.609604+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.609766+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.609910+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.610075+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.610228+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.610392+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.610549+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.610750+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.611039+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.611551+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.611987+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.612389+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.612522+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.613002+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.613207+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.613398+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.613569+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.613691+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.613827+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.613961+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.614083+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 638976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.614209+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1933312 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.614332+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2179072 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:47 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:47 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:01:47 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.614529+0000)
Nov 22 06:01:47 compute-0 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:01:47 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:01:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:01:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:01:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 06:01:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2885789032' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14579 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/956909896' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4220139389' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: pgmap v1271: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1543863777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2885789032' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 06:01:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2937092397' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:01:47 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14583 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 06:01:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316180778' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 06:01:48 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451420270' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: from='client.14579 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2937092397' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: from='client.14583 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1316180778' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:01:48 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:49 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14593 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:01:49 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:01:49.072+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:01:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 06:01:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1459366565' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 06:01:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366959193' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 06:01:49 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/763088824' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: from='client.14587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/451420270' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: pgmap v1272: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:49 compute-0 ceph-mon[75840]: from='client.14593 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1459366565' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 06:01:49 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/366959193' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 06:01:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772189802' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 06:01:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109510212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 06:01:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580993086' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 06:01:50 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158254947' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/763088824' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1772189802' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4109510212' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1580993086' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/158254947' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 06:01:50 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097651303' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 06:01:51 compute-0 sudo[276710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:51 compute-0 sudo[276710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:51 compute-0 sudo[276710]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/682878500' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 06:01:51 compute-0 sudo[276746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:01:51 compute-0 sudo[276746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:51 compute-0 sudo[276746]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:51 compute-0 crontab[276815]: (root) LIST (root)
Nov 22 06:01:51 compute-0 sudo[276781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:51 compute-0 sudo[276781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:51 compute-0 sudo[276781]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:51 compute-0 sudo[276830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:01:51 compute-0 sudo[276830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013787005' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3152734359' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 06:01:51 compute-0 sudo[276830]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:26.513660+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 2932736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:27.513863+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 2924544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 843165 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:28.514028+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:29:57.577630+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:29:57.591767+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 129) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:29:57.577630+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:29:57.591767+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 2924544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:29.514243+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:29:58.601439+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:29:58.615710+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 131) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:29:58.601439+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:29:58.615710+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:30.514464+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.113706589s of 10.150311470s, submitted: 10
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:31.514660+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:00.632516+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.c scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:00.646578+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.c scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 133) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:00.632516+0000 osd.1 (osd.1) 132 : cluster [DBG] 5.c scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:00.646578+0000 osd.1 (osd.1) 133 : cluster [DBG] 5.c scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 2916352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:32.514862+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 2908160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846608 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:33.514991+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:02.565055+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.18 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:02.579174+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.18 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 135) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:02.565055+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.18 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:02.579174+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.18 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 2908160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:34.515186+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:35.515304+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:36.515541+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 2891776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:37.515689+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 2883584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846608 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:38.515797+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 2883584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:39.515930+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 2875392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:40.516136+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:09.557928+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.1 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:09.572050+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.1 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 137) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:09.557928+0000 osd.1 (osd.1) 136 : cluster [DBG] 8.1 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:09.572050+0000 osd.1 (osd.1) 137 : cluster [DBG] 8.1 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 2875392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:41.516716+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:10.533130+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.3 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:10.550832+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.3 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.896636963s of 10.933871269s, submitted: 8
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 139) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:10.533130+0000 osd.1 (osd.1) 138 : cluster [DBG] 8.3 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:10.550832+0000 osd.1 (osd.1) 139 : cluster [DBG] 8.3 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 2867200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:42.516941+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:11.566439+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.5 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:11.580404+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.5 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 141) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:11.566439+0000 osd.1 (osd.1) 140 : cluster [DBG] 8.5 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:11.580404+0000 osd.1 (osd.1) 141 : cluster [DBG] 8.5 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 2859008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850049 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:43.517173+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 2850816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:44.517341+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 2850816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:45.517463+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:46.517647+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:47.517760+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 2842624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850049 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:48.517876+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 2834432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:49.518146+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 2834432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:50.518316+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 2826240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:51.518544+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:21.461391+0000 osd.1 (osd.1) 142 : cluster [DBG] 8.7 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:21.475419+0000 osd.1 (osd.1) 143 : cluster [DBG] 8.7 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 143) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:21.461391+0000 osd.1 (osd.1) 142 : cluster [DBG] 8.7 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:21.475419+0000 osd.1 (osd.1) 143 : cluster [DBG] 8.7 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 2826240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:52.518760+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851196 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.920650482s of 11.937989235s, submitted: 4
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:53.518944+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:23.504404+0000 osd.1 (osd.1) 144 : cluster [DBG] 8.8 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:23.518506+0000 osd.1 (osd.1) 145 : cluster [DBG] 8.8 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 145) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:23.504404+0000 osd.1 (osd.1) 144 : cluster [DBG] 8.8 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:23.518506+0000 osd.1 (osd.1) 145 : cluster [DBG] 8.8 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:54.519179+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 2818048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:55.519338+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 2809856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:56.519504+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 2809856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:57.519643+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 2801664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852343 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:58.519786+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 2801664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:59.519930+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:00.520089+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:01.520343+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 2793472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:02.520487+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:32.265831+0000 osd.1 (osd.1) 146 : cluster [DBG] 8.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:32.279945+0000 osd.1 (osd.1) 147 : cluster [DBG] 8.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 147) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:32.265831+0000 osd.1 (osd.1) 146 : cluster [DBG] 8.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:32.279945+0000 osd.1 (osd.1) 147 : cluster [DBG] 8.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 2785280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853490 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:03.520740+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 2785280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:04.520907+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.758671761s of 11.776473999s, submitted: 4
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 2777088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:05.521087+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:35.280804+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:35.294909+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 149) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:35.280804+0000 osd.1 (osd.1) 148 : cluster [DBG] 8.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:35.294909+0000 osd.1 (osd.1) 149 : cluster [DBG] 8.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 2777088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:06.521299+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:07.521422+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 854638 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:08.521662+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 2768896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:09.521857+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:39.315538+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:39.329569+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 151) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:39.315538+0000 osd.1 (osd.1) 150 : cluster [DBG] 8.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:39.329569+0000 osd.1 (osd.1) 151 : cluster [DBG] 8.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 2752512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:10.522068+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 2752512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:11.522259+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:41.340225+0000 osd.1 (osd.1) 152 : cluster [DBG] 8.17 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:41.354320+0000 osd.1 (osd.1) 153 : cluster [DBG] 8.17 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 153) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:41.340225+0000 osd.1 (osd.1) 152 : cluster [DBG] 8.17 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:41.354320+0000 osd.1 (osd.1) 153 : cluster [DBG] 8.17 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:12.522466+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856934 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:13.522624+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 2744320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:14.522733+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.980106354s of 10.004346848s, submitted: 6
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 2736128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:15.522913+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:45.285322+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:45.299578+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 155) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:45.285322+0000 osd.1 (osd.1) 154 : cluster [DBG] 8.19 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:45.299578+0000 osd.1 (osd.1) 155 : cluster [DBG] 8.19 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 2736128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:16.523102+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:17.523253+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858082 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:18.523433+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 2727936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:19.523551+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:49.244328+0000 osd.1 (osd.1) 156 : cluster [DBG] 8.1e scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:49.258427+0000 osd.1 (osd.1) 157 : cluster [DBG] 8.1e scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 157) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:49.244328+0000 osd.1 (osd.1) 156 : cluster [DBG] 8.1e scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:49.258427+0000 osd.1 (osd.1) 157 : cluster [DBG] 8.1e scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:20.523763+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:50.277457+0000 osd.1 (osd.1) 158 : cluster [DBG] 9.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:50.312756+0000 osd.1 (osd.1) 159 : cluster [DBG] 9.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 159) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:50.277457+0000 osd.1 (osd.1) 158 : cluster [DBG] 9.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:50.312756+0000 osd.1 (osd.1) 159 : cluster [DBG] 9.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:21.524031+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:22.524190+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860377 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:23.524381+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 2711552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:24.524559+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:54.388866+0000 osd.1 (osd.1) 160 : cluster [DBG] 9.4 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:54.441789+0000 osd.1 (osd.1) 161 : cluster [DBG] 9.4 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 161) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:54.388866+0000 osd.1 (osd.1) 160 : cluster [DBG] 9.4 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:54.441789+0000 osd.1 (osd.1) 161 : cluster [DBG] 9.4 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 2711552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:25.525211+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.136055946s of 11.168321609s, submitted: 8
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 2703360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:26.526969+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:56.453671+0000 osd.1 (osd.1) 162 : cluster [DBG] 9.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:56.503119+0000 osd.1 (osd.1) 163 : cluster [DBG] 9.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 163) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:56.453671+0000 osd.1 (osd.1) 162 : cluster [DBG] 9.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:56.503119+0000 osd.1 (osd.1) 163 : cluster [DBG] 9.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:27.527885+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:57.440960+0000 osd.1 (osd.1) 164 : cluster [DBG] 9.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:57.462158+0000 osd.1 (osd.1) 165 : cluster [DBG] 9.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 2703360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 165) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:57.440960+0000 osd.1 (osd.1) 164 : cluster [DBG] 9.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:57.462158+0000 osd.1 (osd.1) 165 : cluster [DBG] 9.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:28.528158+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863819 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:29.529240+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:59.478050+0000 osd.1 (osd.1) 166 : cluster [DBG] 9.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:30:59.506302+0000 osd.1 (osd.1) 167 : cluster [DBG] 9.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 167) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:59.478050+0000 osd.1 (osd.1) 166 : cluster [DBG] 9.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:30:59.506302+0000 osd.1 (osd.1) 167 : cluster [DBG] 9.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:30.529438+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 2695168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:31.531033+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 2686976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:32.531154+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:01.535898+0000 osd.1 (osd.1) 168 : cluster [DBG] 9.14 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:01.574770+0000 osd.1 (osd.1) 169 : cluster [DBG] 9.14 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 2686976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 169) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:01.535898+0000 osd.1 (osd.1) 168 : cluster [DBG] 9.14 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:01.574770+0000 osd.1 (osd.1) 169 : cluster [DBG] 9.14 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:33.531336+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 1 last_log 170 sent 169 num 1 unsent 1 sending 1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:03.528715+0000 osd.1 (osd.1) 170 : cluster [DBG] 9.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 2670592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867263 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 170) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:03.528715+0000 osd.1 (osd.1) 170 : cluster [DBG] 9.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:34.531578+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 1 last_log 171 sent 170 num 1 unsent 1 sending 1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:03.563991+0000 osd.1 (osd.1) 171 : cluster [DBG] 9.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 2670592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 171) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:03.563991+0000 osd.1 (osd.1) 171 : cluster [DBG] 9.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:35.531886+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:04.543272+0000 osd.1 (osd.1) 172 : cluster [DBG] 11.5 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:04.557418+0000 osd.1 (osd.1) 173 : cluster [DBG] 11.5 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 2662400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 173) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:04.543272+0000 osd.1 (osd.1) 172 : cluster [DBG] 11.5 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:04.557418+0000 osd.1 (osd.1) 173 : cluster [DBG] 11.5 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:36.532134+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 2654208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:37.532261+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 2654208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:38.532510+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868411 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:39.532656+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:40.532869+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 2646016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:41.533051+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 2637824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:42.533223+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 2637824 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.133054733s of 16.182121277s, submitted: 12
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:43.533392+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:12.634866+0000 osd.1 (osd.1) 174 : cluster [DBG] 11.7 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:12.652419+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.7 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 2629632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869559 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 175) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:12.634866+0000 osd.1 (osd.1) 174 : cluster [DBG] 11.7 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:12.652419+0000 osd.1 (osd.1) 175 : cluster [DBG] 11.7 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:44.533627+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:13.597807+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:13.612006+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 2629632 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 177) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:13.597807+0000 osd.1 (osd.1) 176 : cluster [DBG] 11.a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:13.612006+0000 osd.1 (osd.1) 177 : cluster [DBG] 11.a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:45.533883+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:14.577267+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.c deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:14.591239+0000 osd.1 (osd.1) 179 : cluster [DBG] 11.c deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 2613248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 179) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:14.577267+0000 osd.1 (osd.1) 178 : cluster [DBG] 11.c deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:14.591239+0000 osd.1 (osd.1) 179 : cluster [DBG] 11.c deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:46.534059+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 2613248 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:47.534615+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 2605056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:48.534785+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 2605056 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871855 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:49.534943+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:19.511183+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:19.525347+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 181) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:19.511183+0000 osd.1 (osd.1) 180 : cluster [DBG] 11.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:19.525347+0000 osd.1 (osd.1) 181 : cluster [DBG] 11.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 2596864 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:50.535156+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:20.487338+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:20.501489+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 183) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:20.487338+0000 osd.1 (osd.1) 182 : cluster [DBG] 11.16 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:20.501489+0000 osd.1 (osd.1) 183 : cluster [DBG] 11.16 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 2580480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:51.535378+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 2580480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:52.535557+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 2572288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.748580933s of 10.789328575s, submitted: 10
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:53.535701+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:23.425236+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.1d scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:23.439284+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.1d scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 185) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:23.425236+0000 osd.1 (osd.1) 184 : cluster [DBG] 11.1d scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:23.439284+0000 osd.1 (osd.1) 185 : cluster [DBG] 11.1d scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 2572288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875302 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:54.535916+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:55.536100+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:56.536265+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 2564096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:57.536382+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 2555904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:58.536588+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:28.402927+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:28.417296+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 2555904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876451 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 187) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:28.402927+0000 osd.1 (osd.1) 186 : cluster [DBG] 10.13 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:28.417296+0000 osd.1 (osd.1) 187 : cluster [DBG] 10.13 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:59.536799+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 2547712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:00.536893+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:30.481449+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:30.495667+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 2539520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 189) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:30.481449+0000 osd.1 (osd.1) 188 : cluster [DBG] 10.10 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:30.495667+0000 osd.1 (osd.1) 189 : cluster [DBG] 10.10 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:01.537105+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 2531328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:02.537258+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 2531328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.051478386s of 10.075274467s, submitted: 6
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:03.537573+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:33.500630+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.6 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:33.514646+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.6 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878748 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 191) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:33.500630+0000 osd.1 (osd.1) 190 : cluster [DBG] 10.6 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:33.514646+0000 osd.1 (osd.1) 191 : cluster [DBG] 10.6 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:04.537807+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:05.537902+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:06.538044+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:36.421412+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:36.435506+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 2523136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 193) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:36.421412+0000 osd.1 (osd.1) 192 : cluster [DBG] 10.2 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:36.435506+0000 osd.1 (osd.1) 193 : cluster [DBG] 10.2 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:07.538266+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 2514944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:08.538467+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 2514944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879896 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:09.538728+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 2506752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:10.538911+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 2506752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:11.539119+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 2498560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:12.539304+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 2498560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:13.539552+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 2490368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879896 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:14.539704+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 2490368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:15.539883+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.036729813s of 12.049832344s, submitted: 4
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 2482176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:16.540074+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:45.550304+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.b scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:45.564446+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.b scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 2465792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 195) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:45.550304+0000 osd.1 (osd.1) 194 : cluster [DBG] 10.b scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:45.564446+0000 osd.1 (osd.1) 195 : cluster [DBG] 10.b scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:17.540287+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74022912 unmapped: 2465792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:18.540430+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 2457600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881044 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:19.540560+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 2457600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:20.540695+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:49.633644+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.f deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:49.647699+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.f deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.19 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 197) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:49.633644+0000 osd.1 (osd.1) 196 : cluster [DBG] 10.f deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:49.647699+0000 osd.1 (osd.1) 197 : cluster [DBG] 10.f deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:21.540871+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:50.652432+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.19 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:50.666510+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.19 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 199) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:50.652432+0000 osd.1 (osd.1) 198 : cluster [DBG] 10.19 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:50.666510+0000 osd.1 (osd.1) 199 : cluster [DBG] 10.19 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:22.541065+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 2449408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:23.541202+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 2441216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884490 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:24.541371+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:53.634574+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:53.648670+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 2433024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 201) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:53.634574+0000 osd.1 (osd.1) 200 : cluster [DBG] 10.12 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:53.648670+0000 osd.1 (osd.1) 201 : cluster [DBG] 10.12 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:25.541582+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:54.679925+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.11 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:54.694022+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.11 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74055680 unmapped: 2433024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 203) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:54.679925+0000 osd.1 (osd.1) 202 : cluster [DBG] 10.11 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:54.694022+0000 osd.1 (osd.1) 203 : cluster [DBG] 10.11 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:26.541783+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:27.541995+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:28.542126+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 2424832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:01:51 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885639 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:29.542273+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.083676338s of 14.125527382s, submitted: 10
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 2416640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:30.542429+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:59.675899+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:31:59.690031+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 2416640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 205) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:59.675899+0000 osd.1 (osd.1) 204 : cluster [DBG] 10.1a scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:31:59.690031+0000 osd.1 (osd.1) 205 : cluster [DBG] 10.1a scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:31.542682+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:00.720137+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.14 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:00.737825+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.14 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 207) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:00.720137+0000 osd.1 (osd.1) 206 : cluster [DBG] 10.14 deep-scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:00.737825+0000 osd.1 (osd.1) 207 : cluster [DBG] 10.14 deep-scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:32.542873+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:01.672211+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.15 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:01.704130+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.15 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 209) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:01.672211+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.15 scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:01.704130+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.15 scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:33.543193+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:02.710955+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.1f scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  will send 2025-11-22T05:32:02.746353+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.1f scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 2408448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client handle_log_ack log(last 211) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:02.710955+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.1f scrub starts
Nov 22 06:01:51 compute-0 ceph-osd[90784]: log_client  logged 2025-11-22T05:32:02.746353+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.1f scrub ok
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:34.543406+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 2400256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:35.543677+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 2400256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:36.543827+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 2392064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:37.544010+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 2392064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:38.544265+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:39.544425+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:40.544587+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:41.544787+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:42.544982+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 2383872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:43.545135+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 2375680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:44.545267+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 2375680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:45.545448+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 2367488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:46.545632+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 2359296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:47.545791+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 2359296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:48.545931+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:49.546096+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:50.546264+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 2351104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:51.546509+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 2342912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:52.546645+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74145792 unmapped: 2342912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:53.546784+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 2334720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:54.546992+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 2334720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:55.547190+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:56.547319+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:57.547535+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 2326528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:58.547700+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:59.547875+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:00.548007+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 2318336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:01.548181+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 2310144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:02.548368+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 2310144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:03.548516+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 2301952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:04.548677+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74186752 unmapped: 2301952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:05.548832+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:06.549008+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:07.549198+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:08.549333+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:09.549509+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:10.549644+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:11.549848+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:12.550008+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 2293760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:13.550157+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:14.550307+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 2285568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:15.550466+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 2277376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:16.550630+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 2277376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:17.550805+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:18.550964+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:19.551140+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 2269184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:20.551291+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 2260992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:21.551524+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 2260992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:22.551678+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 2252800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:23.551797+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 2252800 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:24.551940+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:25.552080+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:26.552193+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 2244608 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:27.552341+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 2236416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:28.552456+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 2236416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:29.552634+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 2228224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:30.552783+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 2220032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:31.553029+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 2220032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:32.553170+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 2211840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:33.553364+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 2211840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:34.553562+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 2203648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:35.553720+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 2203648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:36.553922+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:37.554150+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:38.554310+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 2195456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:39.554467+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 2187264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:40.554736+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 2187264 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:41.555000+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 2179072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:42.555172+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 2179072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:43.555313+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 2170880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:44.555455+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 2170880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:45.555681+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 2162688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:46.555858+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 2154496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:47.556842+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 2154496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:48.556955+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:49.557122+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:50.557266+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 2146304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:51.557442+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 2138112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:52.557561+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 2138112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:53.557695+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 2129920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:54.557843+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 2129920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:55.557989+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 2121728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:56.558119+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 2121728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:57.558345+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:58.558504+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:59.558646+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 2113536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:00.558798+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:01.558964+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:02.559131+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 2105344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:03.559287+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 2097152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:04.559448+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 2097152 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:05.559598+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 2088960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:06.559741+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 2088960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:07.559912+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 2080768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:08.560038+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 2080768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:09.560224+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 2072576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:10.560352+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74416128 unmapped: 2072576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:11.560524+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:12.560668+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:13.560848+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 2064384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:14.561087+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 2056192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:15.561294+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 2048000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:16.561453+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:17.561691+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:18.561830+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 2039808 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:19.562013+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 2031616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:20.562210+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 2031616 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:21.562411+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 2023424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:22.562559+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 2023424 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:23.562779+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:24.562920+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:25.563065+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 2015232 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:26.563206+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 2007040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:27.563404+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 2007040 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:28.563565+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:51 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:29.563718+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:30.563902+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 1998848 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:31.564140+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1990656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:32.564328+0000)
Nov 22 06:01:51 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 1990656 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:51 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:33.564502+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:34.564681+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:35.564873+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 1982464 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:36.565027+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1974272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:37.565186+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74514432 unmapped: 1974272 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:38.565355+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:39.565547+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:40.565655+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 1966080 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:41.565814+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1957888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:42.565969+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 1957888 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:43.566252+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1949696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:44.566400+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 1949696 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:45.566533+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:46.566711+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:47.566840+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 1941504 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:48.566998+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 1933312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:49.567133+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 1933312 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:50.567315+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:51.567462+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:52.567568+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 1925120 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:53.567766+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 1916928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:54.567952+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 1916928 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:55.568163+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:56.568350+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:57.568538+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 1908736 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:58.568692+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 1900544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:59.568816+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 1900544 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:00.568989+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 1892352 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:01.569159+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 1884160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:02.569322+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 1884160 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:03.569518+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:04.569663+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:05.569814+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 1875968 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:06.569974+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 1867776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:07.570121+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 1867776 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:08.570237+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:09.570383+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:10.570552+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 1859584 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:11.570721+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 1851392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:12.570870+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 1851392 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:13.571048+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 1843200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:14.571263+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 1843200 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:15.571399+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:16.571571+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:17.571781+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 1835008 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:18.571935+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:19.572076+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 1826816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:20.572195+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 1818624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:21.572375+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 1818624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:22.572527+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 1810432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:23.572646+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 1810432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:24.572868+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 1802240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:25.573014+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:26.573158+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 1794048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:27.573530+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:28.573709+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:29.573921+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 1785856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:30.574148+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 1777664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:31.574387+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 1777664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:32.574579+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:33.574783+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:34.574967+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:35.575119+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:36.575235+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 1769472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:37.575387+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:38.575564+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 1761280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:39.575723+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:40.575950+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:41.576645+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74735616 unmapped: 1753088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:42.577617+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:43.577768+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 1744896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:44.577828+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 1736704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:45.577947+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:46.578051+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 1728512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:47.578189+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:48.578369+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 1720320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:49.578572+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:50.578713+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 1712128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:51.578926+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:52.579075+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 1703936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:53.579227+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:54.579448+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:55.579612+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 1695744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:56.579853+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:57.580079+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 1687552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:58.580286+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:59.580558+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 1679360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:00.580765+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:01.580974+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:02.581095+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 1671168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:03.581297+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:04.581460+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 1662976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:05.581633+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 1654784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:06.581838+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 1654784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:07.582063+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:08.582231+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:09.582416+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 1646592 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:10.582558+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 1638400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:11.582707+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 1638400 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:12.582870+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:13.583039+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:14.583253+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 1630208 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:15.583582+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 1622016 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:16.583842+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6771 writes, 28K keys, 6771 commit groups, 1.0 writes per commit group, ingest: 19.67 MB, 0.03 MB/s
                                           Interval WAL: 6771 writes, 1155 syncs, 5.86 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 1556480 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:17.583999+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:18.584406+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:19.584621+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:20.584912+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 1548288 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:21.585246+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:22.585503+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74948608 unmapped: 1540096 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:23.585690+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:24.585982+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:25.586258+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 1531904 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:26.586396+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:27.586684+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 1523712 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:28.586840+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1515520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:29.587012+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 1515520 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:30.587252+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 1507328 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:31.587557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:32.587738+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 1499136 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:33.587947+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:34.588146+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:35.588317+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 1490944 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:36.588454+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:37.588615+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 1482752 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:38.588870+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:39.589012+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75014144 unmapped: 1474560 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:40.589206+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:41.589368+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75022336 unmapped: 1466368 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:42.589550+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:43.589690+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:44.589844+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 1458176 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:45.589978+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 1449984 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:46.590180+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 1449984 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:47.590378+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:48.590600+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 1441792 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:49.590760+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:50.591042+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:51.591203+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 1433600 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:52.591402+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:53.591575+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 1425408 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:54.591684+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:55.591828+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 1417216 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:56.592096+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:57.592202+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:58.592453+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 1409024 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:59.592585+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 1400832 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:00.592734+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 1392640 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:01.592909+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:02.593062+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:03.593176+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 1384448 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:04.593355+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1376256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:05.593557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 1376256 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:06.593700+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1368064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:07.593877+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 1368064 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:08.594020+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:09.594204+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:10.594456+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 1359872 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:11.594732+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:12.594917+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 1351680 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:13.595089+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:14.595264+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 1343488 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:15.595453+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:16.595674+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:17.595814+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 1335296 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:18.595932+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:19.596079+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 1327104 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:20.596211+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:21.596376+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:22.596546+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 1318912 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:23.596686+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:24.596822+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 1310720 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:25.596962+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:26.597150+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 1302528 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:27.597305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:28.597421+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:29.597569+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75194368 unmapped: 1294336 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:30.597772+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:31.597937+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:32.598084+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 1286144 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:33.598226+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:34.598402+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.598602+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.598734+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.598881+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.599031+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.599157+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.599305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.599504+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.599645+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.599829+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.600002+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 315.175384521s of 315.207031250s, submitted: 8
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.600167+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.600308+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.600442+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.600550+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.600688+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.600851+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.601018+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.601517+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.601820+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.602397+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.602671+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.603012+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.603253+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.603408+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.603660+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.603803+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.603990+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.604181+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.604428+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.604566+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.604691+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.605023+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.605172+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.605325+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.605455+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.605621+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.605763+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.605871+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.606018+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.606182+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.606345+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.606592+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.606780+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.607019+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.607221+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.607557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.607736+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.607862+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.607986+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.608123+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.608244+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.608366+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.608502+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.608635+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.608761+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.609006+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.609406+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.609562+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.609763+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.609910+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.610048+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.610213+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.610421+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.610585+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.610690+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.610873+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.611106+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.611277+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.611444+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.611542+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.611694+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.611825+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.611982+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.612222+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.612500+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.612749+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.613025+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.613275+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.613527+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.613759+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.614005+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.614228+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.614376+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.614579+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.614744+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.614979+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.615208+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.615367+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.615541+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.615669+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.615799+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.615941+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.616115+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.616238+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.616412+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.616574+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.616762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.616961+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.617111+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.617259+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.617546+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.617712+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.617848+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.617987+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.618180+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.618339+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.618555+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.618707+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.618858+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.619025+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.619245+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.619433+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.619591+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.619723+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.619899+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.620036+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.620227+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.620399+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.620560+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.620714+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.620852+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.620978+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.621077+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.621212+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.621364+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.621540+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.621737+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.621891+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.622076+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.622209+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.622346+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.622507+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.622642+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.622794+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.623127+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.623271+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.623439+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.623559+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.623696+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.623854+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.624009+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.624129+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.624309+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.624507+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.624688+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.624805+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.625010+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.625236+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.625353+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.625511+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.625656+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.625779+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.625940+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.626112+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.626284+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.626447+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.626755+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.626952+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.627088+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.627256+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.627405+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.627575+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.627731+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.627908+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.628088+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.628222+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.628400+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.628570+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.628710+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.628845+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.629022+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.629161+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.629280+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.629497+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.629611+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.629734+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.629878+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.629998+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.630145+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.630334+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.630556+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.630719+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.630875+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.630997+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.631157+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.631316+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.631500+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.631610+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.631724+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.632454+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.632699+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.632859+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.632986+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.633126+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.633269+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.633418+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.633633+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.633802+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.633968+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.634085+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.634247+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.634399+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.634575+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.634766+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.634917+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.635051+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.635226+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.635374+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.635549+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.635711+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.635879+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.636029+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.636176+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.636328+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.636461+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.636673+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.636832+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.636961+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.637092+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.637210+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.637405+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.637562+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.637745+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.637892+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.638075+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.638215+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.638372+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.638538+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.638650+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.638780+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc ms_handle_reset ms_handle_reset con 0x55e99eb0fc00
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: get_auth_request con 0x55e99ff2bc00 auth_method 0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.639125+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 802816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e99f657800 session 0x55e99f863680
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a038a000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e9a038a400 session 0x55e99ff2c000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657800
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.639247+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.639395+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.639562+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.639709+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.639846+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.640029+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.640144+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.640268+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.640395+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.640522+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.640826+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.641005+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.641124+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.641287+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.641409+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.641963+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.642151+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.642692+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.642843+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.643004+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.643151+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.643305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.643509+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.643646+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.643790+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.643991+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.644109+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.644397+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.644668+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.644846+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.644987+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.645217+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.645385+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.645558+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.645700+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.645849+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.646126+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.646333+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.646493+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.646686+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.646858+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.647053+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.647207+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.647384+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.647586+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.648427+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.648575+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.648702+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.648834+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.648981+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.649112+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.649287+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.649400+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.649566+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.649715+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.649895+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.650061+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.650214+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.650421+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.650586+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.650758+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.651005+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.651169+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.651384+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.651616+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.651915+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.652127+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.652356+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.652568+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.652724+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.653417+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.653588+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.653786+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.653902+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.654055+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.654226+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.654456+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.654637+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.654803+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.661059+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.661234+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.661403+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.661576+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.661686+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.661832+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.662019+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.662234+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.662402+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.662557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.662884+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.663149+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.663317+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.663466+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.663683+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.663869+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.664030+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.664228+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.664360+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.664546+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.664725+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.664879+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.665033+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.665175+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.665330+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.665514+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.665757+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.665900+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.666046+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.666211+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.666326+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.666457+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.666705+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.666880+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.666993+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.667122+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.667283+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.667418+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.667582+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.667707+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.667835+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.668001+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.668175+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.668362+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.668534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.668723+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.668949+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.669097+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.669254+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.669373+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.669554+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.669744+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.669906+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.670072+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.670266+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.670445+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.670641+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.670767+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.670893+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.671048+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.671208+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.671348+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.671541+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.671696+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.671851+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.672158+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.672295+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.672443+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.672654+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.672832+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.672972+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.673111+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.673284+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.673434+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.673574+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.673697+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.673845+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.673985+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.674138+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.674272+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.674439+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.674633+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.674788+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.674951+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.675091+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.675217+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.675370+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.675542+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.675690+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.675882+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.676142+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.676306+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.676518+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.676653+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.676783+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.676904+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.677087+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.677260+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.677396+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.677549+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.677764+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.677894+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.678055+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.678211+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.678389+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.678549+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.678714+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.678866+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.679074+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.679282+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.679460+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.679667+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.679797+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.679974+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.680141+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.680314+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.680548+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.680748+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.680968+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.681147+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.681351+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.681486+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.681660+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.681800+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.681978+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.682228+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.682404+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.682580+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.682787+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.682948+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.683128+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.683303+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.683455+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.683626+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.683830+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.683971+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.684146+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.684339+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.684514+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.684636+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.684761+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.684890+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.685185+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.685339+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.685487+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.685598+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.685728+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.685893+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.686066+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.686164+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.686308+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.686439+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.686551+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.686811+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.687012+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:20.687155+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:21.687291+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.687444+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.687693+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.687848+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.688022+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.688173+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.688366+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.688534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.688703+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.688869+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.689527+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.689700+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.689840+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.689952+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.690179+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.690384+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.690574+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.690743+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.690969+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.691162+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.691369+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.691501+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.691675+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.691838+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.691990+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.692184+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.692336+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.692566+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.692795+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.692964+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.693161+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.693318+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.693465+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.693667+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.693831+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.693920+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.694114+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.694308+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.694507+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.694637+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.694856+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.695040+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.695197+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.695344+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.695539+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.695702+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.695883+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.696322+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.696442+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.696590+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.696781+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.696915+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.697029+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.697147+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.697302+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.697455+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 6951 writes, 28K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6951 writes, 1245 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.697673+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.697822+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.697933+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.698163+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.698378+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.698531+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.698680+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.698882+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.699955+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.700578+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.701008+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.701768+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.702433+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.703055+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.703267+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.703568+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.704024+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.704310+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.704698+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.705003+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.705260+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.705425+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.705612+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.705882+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.706189+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.706409+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.706604+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.706849+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.707037+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.707252+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.707424+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.707647+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.707846+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.708082+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.708374+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.708559+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.708848+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.709059+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.709306+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.709595+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.709857+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.710051+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.710217+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.710391+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.710618+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.710818+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.711004+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.711185+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.711513+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.711699+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.711859+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.712101+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.712312+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.712549+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.712766+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.712955+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.713127+0000)
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952415537' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.713266+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.713419+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.713580+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.713709+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.713858+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.714000+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.714163+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.714364+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.714549+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.714670+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.714802+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.714980+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.715184+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.715346+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.715547+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.715743+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.715915+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.716214+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.716432+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.716589+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.716717+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.716872+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.717016+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.717145+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.717322+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.717559+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.717745+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.717969+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.718179+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.718358+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.718564+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.096008301s of 600.027343750s, submitted: 90
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.718733+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.718917+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.719100+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.719277+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.719461+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.719675+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.719899+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.720090+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.720246+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.720401+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.720601+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.720785+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.720966+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.721099+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.721257+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.721438+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.721695+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.722076+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.722259+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.722420+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.722592+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.722752+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.722865+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.723124+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.723300+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.723463+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.723729+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.723895+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.724084+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.724259+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.724438+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.724640+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-mon[75840]: pgmap v1273: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1097651303' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/682878500' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3013787005' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3152734359' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.724818+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.724985+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.726291+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.726512+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.726663+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.726832+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.726983+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.727116+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.727267+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.727422+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.727535+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.728387+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.729535+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.730676+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.730884+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.731264+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.731404+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.731583+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.732777+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.733561+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.733947+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.734338+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.735143+0000)
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 803382c5-a199-4901-88de-551699bc0c0d does not exist
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 13a3fabe-ceb7-4590-8f48-bc305bc3d352 does not exist
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev db3016cb-6aeb-4186-a0f6-5930a85851a5 does not exist
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.735671+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.736118+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.736353+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.736888+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.737172+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.737621+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.738033+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.738616+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.738829+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.739161+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.739383+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.739610+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.739793+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.740014+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.740188+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.740413+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.740602+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.740835+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.740990+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.741265+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.741507+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.741817+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.741996+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.742158+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.742328+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.742548+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.742695+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.742833+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.743036+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.743209+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.743375+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.743591+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.743754+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.743899+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.744090+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.744263+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.744409+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.744524+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.744732+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.744919+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.745114+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.745343+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.745580+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.745737+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.745939+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.746132+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.746317+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.746598+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.746833+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.747012+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.747208+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.747431+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.747590+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.747776+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.747953+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.748097+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.748233+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.748402+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.748568+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.748721+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.748963+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.749227+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.750091+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.750237+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.750697+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.751182+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.751564+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.751758+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.751953+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.752228+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.752390+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.752553+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.752854+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.753201+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.753533+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.753772+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.753963+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.754166+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.754381+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.754542+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.754726+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.754991+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.755142+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.755298+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.755518+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.755887+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.756061+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.756224+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.756384+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.756512+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.756618+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.756757+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.756914+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.757108+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.757694+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.757829+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.758054+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.758284+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.758608+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.758787+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.758979+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.759224+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.759421+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.759584+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.759747+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.759918+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.760113+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.760324+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.760525+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.760671+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.760817+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.761015+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.761187+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.761383+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.761537+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.761745+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.761941+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.762142+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.762374+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.762553+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.762742+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.762933+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.763098+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.763281+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.763448+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.763583+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.763742+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.763919+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.764071+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.764285+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.764511+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657400
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.764688+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 186.598068237s of 186.921478271s, submitted: 90
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.765887+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896151 data_alloc: 218103808 data_used: 237568
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc5c0000/0x0/0x4ffc00000, data 0x59b775/0x65e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.766361+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 10878976 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 130 ms_handle_reset con 0x55e99f657400 session 0x55e9a24a7e00
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.766592+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 10887168 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a104e400
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.766799+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 18210816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 ms_handle_reset con 0x55e9a104e400 session 0x55e9a24ae3c0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.767227+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.767441+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.767836+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.768100+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.768271+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.768547+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.768731+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.570235252s of 11.802167892s, submitted: 32
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.768907+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.769064+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.810767+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.811147+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.811594+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.812056+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.812459+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.812816+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.813110+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.813388+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.813636+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.813837+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.814055+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.814551+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.814967+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.815794+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.816327+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.816619+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.816869+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.628356934s of 18.760848999s, submitted: 13
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.817219+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 10
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.817571+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.817800+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a24c1c00
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.818147+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.818461+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.818845+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050071 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.819173+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.819971+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 11
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.820190+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 17080320 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.820466+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 17063936 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x15a0a6f/0x1669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.825894356s of 10.000619888s, submitted: 13
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.820650+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054317 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.820871+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.821167+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.821400+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b3000/0x0/0x4ffc00000, data 0x15a0c9e/0x166a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.821626+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.821787+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059907 data_alloc: 218103808 data_used: 262144
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.822046+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.822308+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.822530+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a294e/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.822745+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17039360 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553850174s of 10.000466347s, submitted: 42
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.822944+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057663 data_alloc: 218103808 data_used: 262144
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 16998400 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.823097+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.823286+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.823578+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.823868+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.824095+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062723 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.824521+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.824987+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 16957440 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.825440+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.825717+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791356087s of 10.000161171s, submitted: 30
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.825855+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064529 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.826009+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a46d9/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.826199+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.826372+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.826606+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 16908288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.826738+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063005 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a47a3/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.826903+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.827188+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.827429+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.827750+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.954462051s of 10.000647545s, submitted: 8
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.827922+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066317 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 16834560 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.828127+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a486d/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.828293+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.828507+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.828754+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.829018+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066141 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a489c/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.829391+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.829669+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.829899+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4937/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.830215+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945301056s of 10.004324913s, submitted: 10
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.830409+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067733 data_alloc: 218103808 data_used: 270336
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.830572+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.830837+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.831043+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a8000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.831218+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.831408+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072619 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.831642+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.831903+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.832068+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.832297+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.752876282s of 10.019038200s, submitted: 31
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.832555+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.832773+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.832918+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.833112+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.833273+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a5000/0x0/0x4ffc00000, data 0x15a80b6/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.833592+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 sudo[276994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.833825+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.834034+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.834133+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.834238+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 17022976 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a6000/0x0/0x4ffc00000, data 0x15a811b/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.834403+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429555893s of 10.579680443s, submitted: 16
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075239 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.834609+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.834760+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.834874+0000)
Nov 22 06:01:52 compute-0 sudo[276994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.835043+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.835257+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a7000/0x0/0x4ffc00000, data 0x15a814a/0x1677000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080909 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.835407+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x15a9f79/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 16965632 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.835533+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 16941056 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.835711+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 16916480 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.835931+0000)
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853278840' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 16900096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.836091+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.703340530s of 10.039009094s, submitted: 101
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086341 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 16891904 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 sudo[276994]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb5a0000/0x0/0x4ffc00000, data 0x15ad952/0x167e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.836244+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.836391+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.836583+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.836846+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb59f000/0x0/0x4ffc00000, data 0x15adaee/0x167f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.837001+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089693 data_alloc: 218103808 data_used: 303104
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.837135+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb59b000/0x0/0x4ffc00000, data 0x15af876/0x1682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.837255+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.837411+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.837646+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.837838+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.695872307s of 10.031254768s, submitted: 129
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095739 data_alloc: 218103808 data_used: 311296
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.838611+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb596000/0x0/0x4ffc00000, data 0x15b32a5/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.838807+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.839411+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.839646+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb597000/0x0/0x4ffc00000, data 0x15b336f/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.839958+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094987 data_alloc: 218103808 data_used: 315392
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.840584+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.840874+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.841019+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.841203+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 13574144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x15b4f28/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.841339+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.292222023s of 10.259329796s, submitted: 47
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.841700+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.842008+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.842144+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.842292+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.842534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.842674+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.842797+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.843000+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.843212+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.843422+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb591000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101383 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.843801+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688630104s of 10.714872360s, submitted: 17
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.844037+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.844255+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.844466+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.844667+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.844832+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.845010+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.845183+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.845387+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.845613+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.845877+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.846126+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.969901085s of 11.010137558s, submitted: 5
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.846355+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6bda/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.846520+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.846707+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104871 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.846917+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6ca2/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.847089+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6d07/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.847237+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.847442+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58f000/0x0/0x4ffc00000, data 0x15b6d05/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.847686+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104743 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.847887+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.848061+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6ca4/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.848324+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.848534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.209359169s of 12.344432831s, submitted: 13
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 13410304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.848774+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.848963+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.849148+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.849319+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6d6e/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.849530+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.849689+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103167 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.849859+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6dd3/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.850029+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.850196+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.850422+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896079063s of 10.000349998s, submitted: 7
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.850632+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102975 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.850796+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.850960+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6e9d/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.851173+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.851438+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6f02/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.851588+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106511 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.851750+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a24c1800
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.851949+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 12
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.852090+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b70ae/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.852283+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912652969s of 10.000061989s, submitted: 11
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.852860+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107429 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.853085+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.853359+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.853554+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.853850+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 13238272 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7398/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.854456+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110633 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.854885+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.855093+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b748f/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.855278+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.855528+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.774273872s of 10.000308037s, submitted: 20
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.855715+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.855875+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.856043+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.856357+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b7593/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.856581+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.856913+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.857221+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.857590+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 13148160 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.857820+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.857966+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58b000/0x0/0x4ffc00000, data 0x15b76c0/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.892574310s of 10.000001907s, submitted: 8
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.858156+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112629 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.858289+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.858610+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.858881+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.859106+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7788/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.859240+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.859433+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.859564+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.859730+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.859904+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.934629440s of 10.000991821s, submitted: 10
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.860092+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.860229+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.860389+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.860516+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.860676+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.860773+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114461 data_alloc: 218103808 data_used: 335872
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.860907+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.861043+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7881/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.861201+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.861370+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.757849693s of 10.007835388s, submitted: 14
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.861557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119553 data_alloc: 218103808 data_used: 344064
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.861712+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 13025280 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.861893+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb588000/0x0/0x4ffc00000, data 0x15b9635/0x1695000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 12607488 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb577000/0x0/0x4ffc00000, data 0x15ca73b/0x16a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [1])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.862099+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 11862016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.862275+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 9256960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.862458+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 9355264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126243 data_alloc: 218103808 data_used: 344064
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.862680+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 9289728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f9f70000/0x0/0x4ffc00000, data 0x162369f/0x16fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.862845+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 9240576 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.862971+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.863141+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.811680794s of 10.004056931s, submitted: 91
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.863314+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 8683520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141655 data_alloc: 218103808 data_used: 348160
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.863436+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.863584+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.863715+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.863932+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 8454144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.864075+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 6971392 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e9e000/0x0/0x4ffc00000, data 0x16f345c/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151375 data_alloc: 218103808 data_used: 356352
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.864215+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 6946816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.864371+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.864515+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e7a000/0x0/0x4ffc00000, data 0x1715edb/0x17f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.864666+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 7110656 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.935678482s of 10.000359535s, submitted: 115
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.864821+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 6750208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.864946+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153195 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e2a000/0x0/0x4ffc00000, data 0x176630e/0x1844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.865072+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.865195+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.865313+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.865530+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 6488064 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.865716+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158005 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 6242304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.865868+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9df1000/0x0/0x4ffc00000, data 0x179f4a5/0x187d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90120192 unmapped: 5128192 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dd9000/0x0/0x4ffc00000, data 0x17b7751/0x1895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.866027+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 4988928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.866172+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dae000/0x0/0x4ffc00000, data 0x17e1ddf/0x18c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 5349376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.467321396s of 10.002651215s, submitted: 62
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.866309+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9da1000/0x0/0x4ffc00000, data 0x17f03b4/0x18cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.866434+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156747 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.866554+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.866710+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.866913+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.867076+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90021888 unmapped: 5226496 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a24c1800 session 0x55e99f683a40
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.867182+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172771 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92217344 unmapped: 3031040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9d20000/0x0/0x4ffc00000, data 0x186df18/0x194d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.867284+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 13
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 2916352 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.867424+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2285568 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.867591+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 2211840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.506773949s of 10.000545502s, submitted: 270
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.867706+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1949696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0x18d34ae/0x19b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.867850+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167737 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.868018+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.868187+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 2572288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.868397+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 2326528 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.868550+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94035968 unmapped: 1212416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c66000/0x0/0x4ffc00000, data 0x192a11c/0x1a08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.868702+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174241 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 1155072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.868875+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 892928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.869033+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c1e000/0x0/0x4ffc00000, data 0x1971777/0x1a50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.869191+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.250415802s of 10.249304771s, submitted: 52
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.869341+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.869512+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184509 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.869705+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 1802240 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.869875+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94756864 unmapped: 1540096 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.870087+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95019008 unmapped: 1277952 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.870234+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94363648 unmapped: 1933312 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.870406+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181477 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.870609+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.870816+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.871051+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf263/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.871302+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.871580+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186381 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.871805+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.872073+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.373756409s of 13.546041489s, submitted: 28
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.872297+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3bd/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.872557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.872704+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187283 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.872894+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.873033+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.873214+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.873372+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.873631+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186769 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.873802+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.873984+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.874242+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.047278404s of 11.189574242s, submitted: 28
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.874405+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.874556+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187191 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.874725+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.874876+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf47c/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.875036+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.875162+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.875329+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191453 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.875534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.875708+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.875873+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57b/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.809183121s of 10.000802040s, submitted: 15
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.876036+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.876214+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191277 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.876362+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.876546+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf51a/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.877132+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.877323+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.877762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190587 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.878457+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.878942+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.879320+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf518/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.646739006s of 10.003696442s, submitted: 8
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.879672+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57d/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.879939+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189925 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.880086+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.880299+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.009005+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9133 writes, 36K keys, 9133 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 9133 writes, 2084 syncs, 4.38 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2182 writes, 7209 keys, 2182 commit groups, 1.0 writes per commit group, ingest: 7.59 MB, 0.01 MB/s
                                           Interval WAL: 2182 writes, 839 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bce000/0x0/0x4ffc00000, data 0x19bf54a/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.009344+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.009654+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190955 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.009983+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.010301+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.010638+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.936868668s of 10.001495361s, submitted: 15
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.010805+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.010994+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188083 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99edc4800 session 0x55e99eaa2f00
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657400
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.011163+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a038a000 session 0x55e99f863a40
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f718000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99f657800 session 0x55e9a247c000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a038a000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.011323+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.011543+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.011708+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.011882+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188387 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.012035+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.012184+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.012361+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.810064316s of 10.060415268s, submitted: 16
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.012547+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.012674+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188195 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.012793+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf740/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.012978+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.013102+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.013246+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.013377+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188243 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.013673+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.013847+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.014172+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.816052437s of 10.073899269s, submitted: 6
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.014397+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.014599+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.014778+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.014955+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.015069+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.015349+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.015615+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.015841+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.015990+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.016281+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.016567+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.016804+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.017013+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.017200+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.017378+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.017585+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.017731+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.017942+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.018074+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.018263+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.514488220s of 19.649364471s, submitted: 4
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.018449+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.018702+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.018906+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.019119+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.019292+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf7a6/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.019551+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.019746+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.019908+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.020094+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.020387+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148883820s of 10.212854385s, submitted: 7
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf904/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.020601+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.020779+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189373 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.021006+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.021249+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.021441+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfb03/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.021642+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.021833+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191691 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.021999+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.022201+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.022444+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847572327s of 10.020855904s, submitted: 15
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.022665+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bfb98/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.022819+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191001 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.023032+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.023319+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.023579+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 1974272 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.023785+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.023943+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193495 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.024361+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.024524+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.024759+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfd88/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.024967+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765477180s of 10.907700539s, submitted: 20
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfe8a/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.025156+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194267 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.025368+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.025571+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.025762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.025993+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.026190+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193819 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.026418+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfeba/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.026590+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.026779+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.026986+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.027395+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195779 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.027569+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628976822s of 11.735780716s, submitted: 18
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 1908736 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19c0084/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.027717+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.027858+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.028014+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bc7000/0x0/0x4ffc00000, data 0x19ca4b5/0x1aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 2654208 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.028184+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208253 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.028318+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.028437+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b8c000/0x0/0x4ffc00000, data 0x1a04374/0x1ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95690752 unmapped: 2703360 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.028648+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.028903+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b52000/0x0/0x4ffc00000, data 0x1a3dfc9/0x1b1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.029222+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211041 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.029508+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896175385s of 10.457120895s, submitted: 145
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 2793472 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.029711+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 2727936 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.029921+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b15000/0x0/0x4ffc00000, data 0x1a78e5c/0x1b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96591872 unmapped: 1802240 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.030127+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 1441792 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.030509+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211969 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.030723+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.031067+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.031445+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.031843+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.032045+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213307 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 2809856 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9aa5000/0x0/0x4ffc00000, data 0x1aec27d/0x1bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.032234+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 2572288 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.032436+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.398673058s of 10.845589638s, submitted: 76
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 2564096 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.032621+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a8b000/0x0/0x4ffc00000, data 0x1b03811/0x1be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.032835+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.033242+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222355 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2195456 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.033572+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b404d5/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97402880 unmapped: 2039808 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.033762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.034079+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.034261+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b409b2/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.034427+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223335 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.034641+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.034826+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.035059+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.777762413s of 10.991305351s, submitted: 54
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.035266+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.035416+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227443 data_alloc: 218103808 data_used: 368640
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 1859584 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.035555+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.035774+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.035953+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.036122+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.036336+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235585 data_alloc: 218103808 data_used: 380928
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98017280 unmapped: 2473984 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.036548+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bb2c5a/0x1c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 2342912 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.036731+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f719c00
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 2088960 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.036937+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 14
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.176039696s of 10.044042587s, submitted: 58
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 1974272 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.037131+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9982000/0x0/0x4ffc00000, data 0x1c085b8/0x1cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 2416640 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.037265+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9983000/0x0/0x4ffc00000, data 0x1c08a7b/0x1ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241069 data_alloc: 218103808 data_used: 380928
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 1204224 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.037557+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99196928 unmapped: 1294336 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.037735+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.037958+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9960000/0x0/0x4ffc00000, data 0x1c2c683/0x1d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.038151+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 868352 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.038324+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246425 data_alloc: 218103808 data_used: 380928
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.038538+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9934000/0x0/0x4ffc00000, data 0x1c58293/0x1d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.038735+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99221504 unmapped: 1269760 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.038892+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.755167961s of 10.000792503s, submitted: 55
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 2129920 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.039091+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 2121728 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.039257+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f98eb000/0x0/0x4ffc00000, data 0x1ca0bfe/0x1d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258021 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 2015232 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.039413+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 2752512 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.039582+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1466368 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.039798+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100204544 unmapped: 1335296 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.039929+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9882000/0x0/0x4ffc00000, data 0x1d094ff/0x1deb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 2154496 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.040070+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255919 data_alloc: 218103808 data_used: 389120
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9880000/0x0/0x4ffc00000, data 0x1d0d156/0x1ded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.040276+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.040463+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 1957888 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.040648+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.610513687s of 10.004332542s, submitted: 107
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 2727936 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.040803+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 2678784 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.040958+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263663 data_alloc: 218103808 data_used: 389120
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.041127+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x1d594d1/0x1e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.041305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9831000/0x0/0x4ffc00000, data 0x1d5cab0/0x1e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.041535+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.041677+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.041900+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268991 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.042031+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 2220032 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.042171+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100548608 unmapped: 2039808 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.042305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.842511177s of 10.004651070s, submitted: 45
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97e8000/0x0/0x4ffc00000, data 0x1da2e8d/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.042434+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.042580+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97ca000/0x0/0x4ffc00000, data 0x1dc2d76/0x1ea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271023 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.042708+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.043176+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1818624 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.043380+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.043598+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.043762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 1589248 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9787000/0x0/0x4ffc00000, data 0x1e04161/0x1ee7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277019 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.043876+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.044035+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.044258+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 1523712 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.390405655s of 10.003252983s, submitted: 43
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9775000/0x0/0x4ffc00000, data 0x1e15f98/0x1ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.044410+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 1638400 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.044572+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289409 data_alloc: 218103808 data_used: 405504
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.044749+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.044898+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 2088960 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f970c000/0x0/0x4ffc00000, data 0x1e7e105/0x1f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.045090+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 2080768 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.045292+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1105920 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.045428+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [155,155], i have 153, src has [1,155]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 925696 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292805 data_alloc: 218103808 data_used: 413696
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.045572+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 1744896 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.045725+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.045876+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912647247s of 10.003127098s, submitted: 105
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f96be000/0x0/0x4ffc00000, data 0x1ec9226/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.046039+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 2678784 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.046179+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104136704 unmapped: 2646016 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300925 data_alloc: 218103808 data_used: 413696
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f925e000/0x0/0x4ffc00000, data 0x1f18ec5/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.047596+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f925e000/0x0/0x4ffc00000, data 0x1f18ec5/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.048065+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.049264+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.049544+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.050658+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310587 data_alloc: 218103808 data_used: 421888
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.051577+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.052346+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 1933312 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f9235000/0x0/0x4ffc00000, data 0x1f3f6cc/0x2028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.053046+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 1933312 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739536285s of 10.003342628s, submitted: 74
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.053689+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105365504 unmapped: 3514368 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.053847+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f91d4000/0x0/0x4ffc00000, data 0x1f9f56a/0x208a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315139 data_alloc: 218103808 data_used: 421888
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.054385+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.054578+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.054855+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.055253+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105725952 unmapped: 3153920 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f91c2000/0x0/0x4ffc00000, data 0x1faf150/0x209b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.055428+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3235840 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320365 data_alloc: 218103808 data_used: 438272
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.055632+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3178496 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f91c2000/0x0/0x4ffc00000, data 0x1faf150/0x209b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.055977+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 3162112 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.056277+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105742336 unmapped: 3137536 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.824184418s of 10.002670288s, submitted: 51
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9168000/0x0/0x4ffc00000, data 0x20087cd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.056448+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.056754+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332749 data_alloc: 218103808 data_used: 438272
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.057065+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.057308+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2580480 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.057532+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x203cc51/0x212a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 2523136 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.057696+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 2457600 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.057887+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 2318336 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331605 data_alloc: 218103808 data_used: 438272
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.058091+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 2318336 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.058246+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 1269760 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.058532+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 1105920 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f90ef000/0x0/0x4ffc00000, data 0x20813ec/0x216f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.058744+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.666082382s of 10.835702896s, submitted: 36
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 1187840 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.058890+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107724800 unmapped: 1155072 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.059111+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343413 data_alloc: 218103808 data_used: 438272
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 2301952 heap: 109928448 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.059256+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 2375680 heap: 109928448 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.059436+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 3366912 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.059690+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9060000/0x0/0x4ffc00000, data 0x210f39f/0x21fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 4358144 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.059905+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 4308992 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.060073+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349605 data_alloc: 218103808 data_used: 446464
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 4308992 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.060292+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 4169728 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.060430+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 4005888 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f9005000/0x0/0x4ffc00000, data 0x2167a88/0x2259000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.060566+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2629632 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.060728+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.010040283s of 10.882234573s, submitted: 96
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2490368 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f8fe2000/0x0/0x4ffc00000, data 0x2189fb7/0x227c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.060871+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354141 data_alloc: 218103808 data_used: 446464
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 2482176 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.061156+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2351104 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.061367+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 3301376 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.061578+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 3293184 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.061773+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 3178496 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.061985+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371181 data_alloc: 218103808 data_used: 454656
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3514368 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f8f76000/0x0/0x4ffc00000, data 0x21f457f/0x22e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.062123+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 3481600 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.062242+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 3211264 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.062389+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3342336 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 sudo[277025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.062550+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f8f0e000/0x0/0x4ffc00000, data 0x225d974/0x234f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 3293184 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.832097054s of 10.369318008s, submitted: 105
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.062691+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379751 data_alloc: 218103808 data_used: 462848
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3219456 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.062918+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 2965504 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f8ef5000/0x0/0x4ffc00000, data 0x2278853/0x2369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.063120+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 2965504 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.063269+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3309568 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.063422+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3096576 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f8eb0000/0x0/0x4ffc00000, data 0x22b874a/0x23ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.063679+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386655 data_alloc: 218103808 data_used: 471040
Nov 22 06:01:52 compute-0 sudo[277025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3096576 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.063824+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 3088384 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.063970+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 2826240 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.064144+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 2801664 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f8e60000/0x0/0x4ffc00000, data 0x230aa9b/0x23fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.064305+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 2801664 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.064513+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390587 data_alloc: 218103808 data_used: 475136
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 2596864 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.064692+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 2596864 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.064863+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.478283882s of 12.892833710s, submitted: 111
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 2523136 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.065040+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 3448832 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.065184+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 3448832 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f7c8c000/0x0/0x4ffc00000, data 0x233d338/0x2431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.065322+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396817 data_alloc: 218103808 data_used: 483328
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 3432448 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.065461+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 3383296 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.065640+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 3579904 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.065974+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f7c61000/0x0/0x4ffc00000, data 0x23687ca/0x245d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 3563520 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.066135+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 3555328 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.066289+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407455 data_alloc: 218103808 data_used: 499712
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 3334144 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.066417+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x23a87c8/0x249e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 111878144 unmapped: 2244608 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.066586+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.526010513s of 10.000200272s, submitted: 99
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 2080768 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.066709+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 1966080 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.066876+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 1966080 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.067015+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412423 data_alloc: 218103808 data_used: 512000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a43000/0x0/0x4ffc00000, data 0x23e4419/0x24da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 1777664 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.067154+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 1777664 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.067304+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a43000/0x0/0x4ffc00000, data 0x23e4419/0x24da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 1712128 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.067587+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a29000/0x0/0x4ffc00000, data 0x23ff43d/0x24f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 1630208 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.067753+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 1630208 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.067866+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415755 data_alloc: 218103808 data_used: 512000
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 1613824 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 sudo[277025]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.068060+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2539520 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.068261+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.746135712s of 10.000371933s, submitted: 78
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 2072576 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a02000/0x0/0x4ffc00000, data 0x2425255/0x251c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.068403+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 2072576 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 169 ms_handle_reset con 0x55e99f719c00 session 0x55e9a26281e0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.068547+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 1744896 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.068695+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 15
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419647 data_alloc: 218103808 data_used: 520192
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 1728512 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.068934+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 1728512 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.069095+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.069235+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6987000/0x0/0x4ffc00000, data 0x249fe4d/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.069365+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.069566+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423983 data_alloc: 218103808 data_used: 520192
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.069735+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.069956+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.070148+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6987000/0x0/0x4ffc00000, data 0x249fe4d/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.937213898s of 11.105960846s, submitted: 234
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.070351+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.070535+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424925 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.070696+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.070878+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6983000/0x0/0x4ffc00000, data 0x24a18b0/0x259a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.071045+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.071198+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.071394+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.071508+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.071730+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.071881+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.072036+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.072199+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.072425+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.072654+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.072860+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.073053+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.073296+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.073553+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.073799+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.073954+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.074084+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.074242+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.074422+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.074592+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.074751+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.074909+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.075059+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.075188+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.075363+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.075628+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.075818+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.075988+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.076224+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.076394+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.076637+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.077619+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.078786+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.079048+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.079249+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.079959+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.080582+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.080993+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.081504+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.081929+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.082317+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.082500+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.082752+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.082940+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.083247+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.083539+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.246944427s of 50.285312653s, submitted: 15
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 ms_handle_reset con 0x55e9a24c1c00 session 0x55e9a24aeb40
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.083715+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 2064384 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.083964+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 16
Nov 22 06:01:52 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.084131+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.084370+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.084597+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.084789+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.084958+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.085139+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.085375+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.085540+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.085693+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.085851+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.086013+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.086168+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.086356+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.086573+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.086762+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.086975+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.087166+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.087334+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.087534+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.087749+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.087925+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.088078+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.088253+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.088433+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.088545+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.088708+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.088949+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.089085+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.089202+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.089327+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.089443+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:17.089594+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:18.089788+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 2023424 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}'
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:19.089927+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 2179072 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:20.090068+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:52 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:52 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113926144 unmapped: 2293760 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:01:52 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:21.090191+0000)
Nov 22 06:01:52 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:01:52 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 2269184 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:52 compute-0 ceph-osd[90784]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:01:52 compute-0 sudo[277063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:52 compute-0 sudo[277063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:52 compute-0 sudo[277063]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:52 compute-0 rsyslogd[1005]: imjournal from <np0005531754:ceph-osd>: begin to drop messages due to rate-limiting
Nov 22 06:01:52 compute-0 sudo[277118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:01:52 compute-0 sudo[277118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721718517' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.73538053 +0000 UTC m=+0.055005333 container create 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 06:01:52 compute-0 systemd[1]: Started libpod-conmon-9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c.scope.
Nov 22 06:01:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.709441906 +0000 UTC m=+0.029066749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.82916901 +0000 UTC m=+0.148793853 container init 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 06:01:52 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 22 06:01:52 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521344141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.835763956 +0000 UTC m=+0.155388759 container start 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:01:52 compute-0 silly_euclid[277244]: 167 167
Nov 22 06:01:52 compute-0 systemd[1]: libpod-9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c.scope: Deactivated successfully.
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.847703406 +0000 UTC m=+0.167328219 container attach 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.848437935 +0000 UTC m=+0.168062728 container died 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a87eca2bb9a9b96781283e17e905a643a5c857cf5b8e9d1956e0ad9edce8e9a-merged.mount: Deactivated successfully.
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14629 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:52 compute-0 podman[277224]: 2025-11-22 06:01:52.968954661 +0000 UTC m=+0.288579484 container remove 9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:01:52 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:52 compute-0 systemd[1]: libpod-conmon-9d3badab19904e17d23bba16efb0c4edc89c5f5813d82cbefd78260645de4e0c.scope: Deactivated successfully.
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/952415537' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2853278840' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1721718517' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2521344141' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:01:53 compute-0 podman[277320]: 2025-11-22 06:01:53.185810815 +0000 UTC m=+0.053376490 container create ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 06:01:53 compute-0 systemd[1]: Started libpod-conmon-ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd.scope.
Nov 22 06:01:53 compute-0 podman[277320]: 2025-11-22 06:01:53.158179736 +0000 UTC m=+0.025745421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 06:01:53 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2173830331' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 06:01:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:53 compute-0 podman[277320]: 2025-11-22 06:01:53.285829642 +0000 UTC m=+0.153395287 container init ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 06:01:53 compute-0 podman[277320]: 2025-11-22 06:01:53.293729233 +0000 UTC m=+0.161294868 container start ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 06:01:53 compute-0 podman[277320]: 2025-11-22 06:01:53.301349987 +0000 UTC m=+0.168915622 container attach ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14633 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 22 06:01:53 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596090385' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 06:01:53 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14637 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14639 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mon[75840]: from='client.14625 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mon[75840]: from='client.14629 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mon[75840]: pgmap v1274: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:54 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2173830331' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2596090385' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14641 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 wizardly_ramanujan[277337]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:01:54 compute-0 wizardly_ramanujan[277337]: --> relative data size: 1.0
Nov 22 06:01:54 compute-0 wizardly_ramanujan[277337]: --> All data devices are unavailable
Nov 22 06:01:54 compute-0 systemd[1]: libpod-ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd.scope: Deactivated successfully.
Nov 22 06:01:54 compute-0 podman[277320]: 2025-11-22 06:01:54.356306541 +0000 UTC m=+1.223872176 container died ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14645 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14643 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aa6c8f7f0b58d8500efc582376bf4d4b0156ab5575f4722d42abb7177c7a9cc-merged.mount: Deactivated successfully.
Nov 22 06:01:54 compute-0 podman[277320]: 2025-11-22 06:01:54.572164319 +0000 UTC m=+1.439729954 container remove ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_ramanujan, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 06:01:54 compute-0 systemd[1]: libpod-conmon-ea34b8aee9fd357fd3b929d1e902d5595c1497e96b34a961b262baad18cfd8fd.scope: Deactivated successfully.
Nov 22 06:01:54 compute-0 sudo[277118]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:54 compute-0 sudo[277547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:54 compute-0 sudo[277547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:54 compute-0 sudo[277547]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:54 compute-0 sudo[277576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:01:54 compute-0 sudo[277576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:54 compute-0 sudo[277576]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:54 compute-0 sudo[277637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:54 compute-0 sudo[277637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:54 compute-0 sudo[277637]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:54 compute-0 sudo[277662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:01:54 compute-0 sudo[277662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:54 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:55 compute-0 ceph-mon[75840]: from='client.14633 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:55 compute-0 ceph-mon[75840]: from='client.14637 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:55 compute-0 ceph-mon[75840]: from='client.14639 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.157535085 +0000 UTC m=+0.049651140 container create 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:01:55 compute-0 systemd[1]: Started libpod-conmon-087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488.scope.
Nov 22 06:01:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.230713954 +0000 UTC m=+0.122830049 container init 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.136648696 +0000 UTC m=+0.028764771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.238915763 +0000 UTC m=+0.131031828 container start 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:01:55 compute-0 lucid_maxwell[277775]: 167 167
Nov 22 06:01:55 compute-0 systemd[1]: libpod-087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488.scope: Deactivated successfully.
Nov 22 06:01:55 compute-0 conmon[277775]: conmon 087008bacb3057eb6ddb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488.scope/container/memory.events
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.249224779 +0000 UTC m=+0.141340854 container attach 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.249594628 +0000 UTC m=+0.141710683 container died 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ff2ac4acbda560af30445c7d80eb5983ae08f5a59682fa072e54933dcebeaa5-merged.mount: Deactivated successfully.
Nov 22 06:01:55 compute-0 podman[277756]: 2025-11-22 06:01:55.30791476 +0000 UTC m=+0.200030815 container remove 087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:01:55 compute-0 systemd[1]: libpod-conmon-087008bacb3057eb6ddb8a0223f826185faf892eea94d471ffeb1efded866488.scope: Deactivated successfully.
Nov 22 06:01:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 22 06:01:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2880392947' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 06:01:55 compute-0 podman[277809]: 2025-11-22 06:01:55.532072479 +0000 UTC m=+0.074985258 container create e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 06:01:55 compute-0 systemd[1]: Started libpod-conmon-e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b.scope.
Nov 22 06:01:55 compute-0 podman[277809]: 2025-11-22 06:01:55.500695699 +0000 UTC m=+0.043608498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8f2c100291afe9ec474fa6687c013a3ec80c2f41f03a7724ef13c66fe7f5f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8f2c100291afe9ec474fa6687c013a3ec80c2f41f03a7724ef13c66fe7f5f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8f2c100291afe9ec474fa6687c013a3ec80c2f41f03a7724ef13c66fe7f5f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8f2c100291afe9ec474fa6687c013a3ec80c2f41f03a7724ef13c66fe7f5f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:55 compute-0 podman[277809]: 2025-11-22 06:01:55.636771381 +0000 UTC m=+0.179684170 container init e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:01:55 compute-0 podman[277809]: 2025-11-22 06:01:55.650878939 +0000 UTC m=+0.193791728 container start e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 06:01:55 compute-0 podman[277809]: 2025-11-22 06:01:55.656036607 +0000 UTC m=+0.198949386 container attach e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 06:01:55 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 22 06:01:55 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58110649' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.14641 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.14645 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.14643 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.14649 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: pgmap v1275: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2880392947' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/58110649' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 06:01:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334916369' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:01:56 compute-0 loving_khorana[277847]: {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     "0": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "devices": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "/dev/loop3"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             ],
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_name": "ceph_lv0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_size": "21470642176",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "name": "ceph_lv0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "tags": {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_name": "ceph",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.crush_device_class": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.encrypted": "0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_id": "0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.vdo": "0"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             },
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "vg_name": "ceph_vg0"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         }
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     ],
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     "1": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "devices": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "/dev/loop4"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             ],
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_name": "ceph_lv1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_size": "21470642176",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "name": "ceph_lv1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "tags": {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_name": "ceph",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.crush_device_class": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.encrypted": "0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_id": "1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.vdo": "0"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             },
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "vg_name": "ceph_vg1"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         }
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     ],
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     "2": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "devices": [
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "/dev/loop5"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             ],
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_name": "ceph_lv2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_size": "21470642176",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "name": "ceph_lv2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "tags": {
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.cluster_name": "ceph",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.crush_device_class": "",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.encrypted": "0",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osd_id": "2",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:                 "ceph.vdo": "0"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             },
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "type": "block",
Nov 22 06:01:56 compute-0 loving_khorana[277847]:             "vg_name": "ceph_vg2"
Nov 22 06:01:56 compute-0 loving_khorana[277847]:         }
Nov 22 06:01:56 compute-0 loving_khorana[277847]:     ]
Nov 22 06:01:56 compute-0 loving_khorana[277847]: }
Nov 22 06:01:56 compute-0 systemd[1]: libpod-e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b.scope: Deactivated successfully.
Nov 22 06:01:56 compute-0 conmon[277847]: conmon e106a3c28462b603eb5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b.scope/container/memory.events
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:27.073720+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 1802240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 846605 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:28.073846+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 1802240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:29.074023+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:29:58.345647+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.1b deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:29:58.359674+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.1b deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 1802240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 123) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:29:58.345647+0000 osd.0 (osd.0) 122 : cluster [DBG] 3.1b deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:29:58.359674+0000 osd.0 (osd.0) 123 : cluster [DBG] 3.1b deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:30.074264+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66306048 unmapped: 1794048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:31.074598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:00.320544+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:00.334793+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66314240 unmapped: 1785856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 125) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:00.320544+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:00.334793+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:32.074840+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 1777664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848901 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:33.075027+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66322432 unmapped: 1777664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:34.075186+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:03.367812+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:03.381950+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.018386841s of 10.048163414s, submitted: 8
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 127) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:03.367812+0000 osd.0 (osd.0) 126 : cluster [DBG] 7.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:03.381950+0000 osd.0 (osd.0) 127 : cluster [DBG] 7.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:35.075400+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:04.410374+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:04.424420+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fceb5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2bcf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 129) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:04.410374+0000 osd.0 (osd.0) 128 : cluster [DBG] 2.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:04.424420+0000 osd.0 (osd.0) 129 : cluster [DBG] 2.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:36.075639+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66338816 unmapped: 1761280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:37.075839+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851195 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:38.076109+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66347008 unmapped: 1753088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:39.076357+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:08.374689+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:08.388809+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66355200 unmapped: 1744896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 131) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:08.374689+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:08.388809+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:40.076704+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:41.077071+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:10.394458+0000 osd.0 (osd.0) 132 : cluster [DBG] 10.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:10.412091+0000 osd.0 (osd.0) 133 : cluster [DBG] 10.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66371584 unmapped: 1728512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 133) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:10.394458+0000 osd.0 (osd.0) 132 : cluster [DBG] 10.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:10.412091+0000 osd.0 (osd.0) 133 : cluster [DBG] 10.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:42.077344+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 1712128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853491 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:43.077540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66387968 unmapped: 1712128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:44.077824+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:13.455414+0000 osd.0 (osd.0) 134 : cluster [DBG] 10.8 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:13.469534+0000 osd.0 (osd.0) 135 : cluster [DBG] 10.8 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.056404114s of 10.087099075s, submitted: 8
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 135) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:13.455414+0000 osd.0 (osd.0) 134 : cluster [DBG] 10.8 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:13.469534+0000 osd.0 (osd.0) 135 : cluster [DBG] 10.8 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:45.078041+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:14.497304+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.15 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:14.514964+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.15 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 137) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:14.497304+0000 osd.0 (osd.0) 136 : cluster [DBG] 10.15 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:14.514964+0000 osd.0 (osd.0) 137 : cluster [DBG] 10.15 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:46.078376+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66396160 unmapped: 1703936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:47.078577+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:16.488849+0000 osd.0 (osd.0) 138 : cluster [DBG] 10.7 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:16.503006+0000 osd.0 (osd.0) 139 : cluster [DBG] 10.7 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 1687552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858084 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 139) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:16.488849+0000 osd.0 (osd.0) 138 : cluster [DBG] 10.7 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:16.503006+0000 osd.0 (osd.0) 139 : cluster [DBG] 10.7 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:48.078861+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:17.505027+0000 osd.0 (osd.0) 140 : cluster [DBG] 10.4 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:17.519140+0000 osd.0 (osd.0) 141 : cluster [DBG] 10.4 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 1687552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 141) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:17.505027+0000 osd.0 (osd.0) 140 : cluster [DBG] 10.4 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:17.519140+0000 osd.0 (osd.0) 141 : cluster [DBG] 10.4 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:49.079124+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66412544 unmapped: 1687552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:50.079294+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 1662976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:51.079440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:20.451884+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:20.465994+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 1662976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 143) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:20.451884+0000 osd.0 (osd.0) 142 : cluster [DBG] 10.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:20.465994+0000 osd.0 (osd.0) 143 : cluster [DBG] 10.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:52.079643+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860382 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:53.079876+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:22.467853+0000 osd.0 (osd.0) 144 : cluster [DBG] 10.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:22.482072+0000 osd.0 (osd.0) 145 : cluster [DBG] 10.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 145) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:22.467853+0000 osd.0 (osd.0) 144 : cluster [DBG] 10.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:22.482072+0000 osd.0 (osd.0) 145 : cluster [DBG] 10.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:54.080117+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66445312 unmapped: 1654784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:55.080308+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 1646592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.903209686s of 10.939954758s, submitted: 10
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:56.080516+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:25.437749+0000 osd.0 (osd.0) 146 : cluster [DBG] 10.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:25.451814+0000 osd.0 (osd.0) 147 : cluster [DBG] 10.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66453504 unmapped: 1646592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 147) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:25.437749+0000 osd.0 (osd.0) 146 : cluster [DBG] 10.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:25.451814+0000 osd.0 (osd.0) 147 : cluster [DBG] 10.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:57.080704+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:26.390929+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:26.408766+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 1638400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863827 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:58.080912+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 4 last_log 151 sent 149 num 4 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:27.380276+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.1e deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:27.394255+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.1e deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 149) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:26.390929+0000 osd.0 (osd.0) 148 : cluster [DBG] 10.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:26.408766+0000 osd.0 (osd.0) 149 : cluster [DBG] 10.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:29:59.081125+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 4 last_log 153 sent 151 num 4 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:28.341679+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:28.359600+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 151) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:27.380276+0000 osd.0 (osd.0) 150 : cluster [DBG] 10.1e deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:27.394255+0000 osd.0 (osd.0) 151 : cluster [DBG] 10.1e deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 153) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:28.341679+0000 osd.0 (osd.0) 152 : cluster [DBG] 10.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:28.359600+0000 osd.0 (osd.0) 153 : cluster [DBG] 10.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 1630208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:00.081361+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 1622016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:01.081578+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:02.081889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:31.298968+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:31.313113+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 155) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:31.298968+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:31.313113+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 1613824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867273 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:03.082119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:32.348589+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:32.362511+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 157) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:32.348589+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.17 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:32.362511+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.17 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 1597440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:04.082363+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 1589248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:05.082548+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 1589248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:06.082844+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 1589248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:07.083036+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 1589248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.954401970s of 12.000329018s, submitted: 12
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868421 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:08.083231+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:37.438040+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:37.452106+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 159) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:37.438040+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.14 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:37.452106+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.14 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:09.083424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 1581056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:10.083629+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:39.419964+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:39.433956+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 161) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:39.419964+0000 osd.0 (osd.0) 160 : cluster [DBG] 8.c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:39.433956+0000 osd.0 (osd.0) 161 : cluster [DBG] 8.c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:11.083827+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:40.448433+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.1 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:40.462417+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.1 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 163) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:40.448433+0000 osd.0 (osd.0) 162 : cluster [DBG] 11.1 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:40.462417+0000 osd.0 (osd.0) 163 : cluster [DBG] 11.1 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:12.083991+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66527232 unmapped: 1572864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870716 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:13.084220+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 1564672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:14.084373+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:43.433372+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:43.446928+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 165) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:43.433372+0000 osd.0 (osd.0) 164 : cluster [DBG] 11.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:43.446928+0000 osd.0 (osd.0) 165 : cluster [DBG] 11.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 1556480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:15.084560+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:44.390192+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:44.404003+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 167) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:44.390192+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:44.404003+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:16.084793+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 1548288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:17.084981+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:46.435912+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:46.450061+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 169) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:46.435912+0000 osd.0 (osd.0) 168 : cluster [DBG] 11.e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:46.450061+0000 osd.0 (osd.0) 169 : cluster [DBG] 11.e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.978877068s of 10.026175499s, submitted: 12
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:18.085530+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:47.464731+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:47.478336+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875307 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 171) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:47.464731+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:47.478336+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:19.085844+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:48.460389+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:48.480860+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 173) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:48.460389+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:48.480860+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:20.086084+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66576384 unmapped: 1523712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:21.086225+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:22.086357+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:23.086559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876454 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:24.086741+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:25.086887+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:26.087118+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:27.087685+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:28.087824+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:57.292965+0000 osd.0 (osd.0) 174 : cluster [DBG] 8.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:57.307075+0000 osd.0 (osd.0) 175 : cluster [DBG] 8.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877601 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.806848526s of 10.837247849s, submitted: 6
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 175) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:57.292965+0000 osd.0 (osd.0) 174 : cluster [DBG] 8.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:57.307075+0000 osd.0 (osd.0) 175 : cluster [DBG] 8.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:29.088051+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:58.301475+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.4 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:30:58.315571+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.4 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 177) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:58.301475+0000 osd.0 (osd.0) 176 : cluster [DBG] 11.4 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:30:58.315571+0000 osd.0 (osd.0) 177 : cluster [DBG] 11.4 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 1482752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:30.088222+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66617344 unmapped: 1482752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:31.088368+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:32.088512+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:33.088699+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878749 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:34.088919+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:03.209144+0000 osd.0 (osd.0) 178 : cluster [DBG] 8.6 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:03.223276+0000 osd.0 (osd.0) 179 : cluster [DBG] 8.6 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 179) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:03.209144+0000 osd.0 (osd.0) 178 : cluster [DBG] 8.6 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:03.223276+0000 osd.0 (osd.0) 179 : cluster [DBG] 8.6 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:35.089221+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:36.089400+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:05.254304+0000 osd.0 (osd.0) 180 : cluster [DBG] 11.6 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:05.268438+0000 osd.0 (osd.0) 181 : cluster [DBG] 11.6 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 181) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:05.254304+0000 osd.0 (osd.0) 180 : cluster [DBG] 11.6 deep-scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:05.268438+0000 osd.0 (osd.0) 181 : cluster [DBG] 11.6 deep-scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:37.089718+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:06.225341+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.18 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:06.243032+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.18 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 183) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:06.225341+0000 osd.0 (osd.0) 182 : cluster [DBG] 8.18 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:06.243032+0000 osd.0 (osd.0) 183 : cluster [DBG] 8.18 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:38.090223+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882192 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:39.090386+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:40.090533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.900626183s of 11.931916237s, submitted: 8
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:41.090777+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:10.233406+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:10.247513+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 185) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:10.233406+0000 osd.0 (osd.0) 184 : cluster [DBG] 8.1f scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:10.247513+0000 osd.0 (osd.0) 185 : cluster [DBG] 8.1f scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:42.091049+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:11.262365+0000 osd.0 (osd.0) 186 : cluster [DBG] 8.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:11.276455+0000 osd.0 (osd.0) 187 : cluster [DBG] 8.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 187) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:11.262365+0000 osd.0 (osd.0) 186 : cluster [DBG] 8.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:11.276455+0000 osd.0 (osd.0) 187 : cluster [DBG] 8.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:43.091328+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884488 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:44.091533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:45.091901+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:46.092275+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 1417216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:47.092407+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 1417216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:48.092564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:17.161414+0000 osd.0 (osd.0) 188 : cluster [DBG] 8.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:17.175539+0000 osd.0 (osd.0) 189 : cluster [DBG] 8.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885635 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 189) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:17.161414+0000 osd.0 (osd.0) 188 : cluster [DBG] 8.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:17.175539+0000 osd.0 (osd.0) 189 : cluster [DBG] 8.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66682880 unmapped: 1417216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:49.092756+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:50.092947+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:19.252881+0000 osd.0 (osd.0) 190 : cluster [DBG] 11.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:19.266975+0000 osd.0 (osd.0) 191 : cluster [DBG] 11.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 191) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:19.252881+0000 osd.0 (osd.0) 190 : cluster [DBG] 11.10 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:19.266975+0000 osd.0 (osd.0) 191 : cluster [DBG] 11.10 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:51.093265+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 1368064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:52.093439+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66732032 unmapped: 1368064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:53.093696+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886784 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:54.093833+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:55.094044+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.001798630s of 15.033724785s, submitted: 8
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:56.094239+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:25.267283+0000 osd.0 (osd.0) 192 : cluster [DBG] 11.19 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:25.281336+0000 osd.0 (osd.0) 193 : cluster [DBG] 11.19 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 193) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:25.267283+0000 osd.0 (osd.0) 192 : cluster [DBG] 11.19 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:25.281336+0000 osd.0 (osd.0) 193 : cluster [DBG] 11.19 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:57.094581+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:58.094859+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889081 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:30:59.095017+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:28.231646+0000 osd.0 (osd.0) 194 : cluster [DBG] 8.1a scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:28.245400+0000 osd.0 (osd.0) 195 : cluster [DBG] 8.1a scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 1327104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 195) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:28.231646+0000 osd.0 (osd.0) 194 : cluster [DBG] 8.1a scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:28.245400+0000 osd.0 (osd.0) 195 : cluster [DBG] 8.1a scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:00.095198+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 1310720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:01.095339+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:30.185526+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.11 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:30.220677+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.11 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 197) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:30.185526+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.11 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:30.220677+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.11 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:02.095533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66797568 unmapped: 1302528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:03.095682+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 891376 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66805760 unmapped: 1294336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:04.095822+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:33.188347+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:33.227139+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 199) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:33.188347+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.9 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:33.227139+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.9 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:05.095987+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.987623215s of 10.022075653s, submitted: 9
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:06.096192+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:35.246863+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:35.292837+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 201) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:35.246863+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.3 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:35.292837+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.3 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:07.096403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:08.096540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892523 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:09.096707+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:10.096867+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:11.096995+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:12.097143+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:41.226356+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:41.268714+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 203) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:41.226356+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:41.268714+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:13.097281+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:42.240868+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:42.276168+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894818 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 205) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:42.240868+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:42.276168+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:14.097460+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:15.097645+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:44.255085+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.5 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:44.293911+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.5 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 207) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:44.255085+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.5 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:44.293911+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.5 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 podman[277923]: 2025-11-22 06:01:56.442763122 +0000 UTC m=+0.025085482 container died e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:16.097820+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:17.097949+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:18.098121+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 895965 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:19.098263+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.862318993s of 13.891262054s, submitted: 7
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66863104 unmapped: 1236992 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:20.098419+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:49.180553+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:49.212297+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 209) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:49.180553+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:49.212297+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:21.098705+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 3 last_log 212 sent 209 num 3 unsent 3 sending 3
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:50.135390+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:50.177774+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:51.090149+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 1204224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 212) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:50.135390+0000 osd.0 (osd.0) 210 : cluster [DBG] 9.d scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:50.177774+0000 osd.0 (osd.0) 211 : cluster [DBG] 9.d scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:51.090149+0000 osd.0 (osd.0) 212 : cluster [DBG] 9.1b scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:22.098978+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 1 last_log 213 sent 212 num 1 unsent 1 sending 1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:51.114799+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66895872 unmapped: 1204224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 213) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:51.114799+0000 osd.0 (osd.0) 213 : cluster [DBG] 9.1b scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:23.099145+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:52.139810+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:52.175121+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 901703 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 215) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:52.139810+0000 osd.0 (osd.0) 214 : cluster [DBG] 9.16 scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:52.175121+0000 osd.0 (osd.0) 215 : cluster [DBG] 9.16 scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:24.099357+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:53.102346+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.1c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:53.141120+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.1c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66904064 unmapped: 1196032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 217) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:53.102346+0000 osd.0 (osd.0) 216 : cluster [DBG] 9.1c scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:53.141120+0000 osd.0 (osd.0) 217 : cluster [DBG] 9.1c scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:25.099563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:26.099718+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:55.117060+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.1e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  will send 2025-11-22T05:31:55.152535+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.1e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client handle_log_ack log(last 219) v1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:55.117060+0000 osd.0 (osd.0) 218 : cluster [DBG] 9.1e scrub starts
Nov 22 06:01:56 compute-0 ceph-osd[89779]: log_client  logged 2025-11-22T05:31:55.152535+0000 osd.0 (osd.0) 219 : cluster [DBG] 9.1e scrub ok
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:27.100349+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:28.100467+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 1171456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:29.100640+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66928640 unmapped: 1171456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:30.100785+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:31.100948+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:32.101058+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:33.101199+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:34.101360+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:35.101535+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:36.101685+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:37.101832+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:38.101989+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:39.102239+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:40.102403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:41.102587+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:42.102770+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:43.102964+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:44.103122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:45.103300+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:46.103531+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:47.103698+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:48.103835+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:49.103992+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1114112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:50.104141+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:51.104320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:52.104528+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:53.104659+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:54.104847+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67010560 unmapped: 1089536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:55.105006+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67018752 unmapped: 1081344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:56.105190+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:57.105333+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:58.105531+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:31:59.105697+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:00.105844+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:01.106017+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:02.106207+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:03.106357+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:04.106580+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:05.106736+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:06.106942+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67067904 unmapped: 1032192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:07.107092+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:08.107216+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:09.107394+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:10.107595+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:11.107791+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:12.107955+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:13.108173+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:14.108311+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:15.108496+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:16.108685+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:17.108843+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:18.109001+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:19.109212+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:20.109419+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:21.109608+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:22.109802+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:23.110020+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:24.110202+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:25.110373+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:26.110572+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:27.110737+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:28.110878+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:29.111015+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:30.111159+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:31.111342+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:32.111540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:33.111721+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:34.111952+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:35.112109+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:36.112290+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:37.112529+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:38.112666+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:39.112817+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:40.112964+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:41.113141+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:42.113308+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:43.113432+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:44.113771+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:45.113914+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:46.114119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:47.114324+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:48.114538+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:49.114875+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:50.115007+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:51.115158+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:52.115271+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:53.115535+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:54.115662+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:55.115818+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:56.116011+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:57.116159+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:58.116314+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:32:59.116498+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:00.116647+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:01.116805+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:02.116998+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:03.117146+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:04.117272+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:05.117416+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:06.117576+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:07.117727+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:08.117897+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:09.118065+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:10.118231+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:11.118409+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:12.118551+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:13.118774+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:14.118908+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:15.119051+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:16.119233+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:17.119392+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:18.119628+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:19.119803+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:20.119958+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:21.120133+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:22.120271+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:23.120520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:24.120666+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:25.120854+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:26.121082+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:27.121242+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:28.121387+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:29.121528+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:30.121675+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:31.121835+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:32.121959+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:33.122134+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:34.122332+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:35.122560+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:36.122786+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:37.122921+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:38.123069+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:39.123189+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:40.123321+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:41.123522+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:42.123656+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:43.123827+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:44.123964+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:45.124110+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:46.124320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:47.124460+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:48.124649+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:49.124822+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:50.124989+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:51.125122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:52.125351+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:53.125564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:54.125707+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:55.125863+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:56.126038+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:57.126207+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:58.126384+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:33:59.126550+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:00.126739+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:01.126899+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:02.127062+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:03.127289+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:04.127443+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:05.127624+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:06.127807+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:07.128015+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:08.128175+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:09.128345+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:10.128509+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:11.128665+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:12.128891+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:13.129175+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:14.129364+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:15.129533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:16.129759+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:17.129884+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:18.130046+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:19.130215+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:20.130371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:21.130561+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:22.130802+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:23.130937+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:24.131145+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:25.131308+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:26.132167+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:27.132298+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:28.132429+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:29.132544+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:30.132685+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:31.132844+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:32.133036+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:33.133162+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:34.133294+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:35.133449+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:36.133668+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:37.133834+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:38.133992+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:39.134164+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:40.134338+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:41.134562+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:42.134801+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:43.134930+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:44.135088+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:45.135233+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:46.135374+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:47.135514+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:48.135647+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:49.135800+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:50.135929+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:51.136062+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:52.136236+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:53.136379+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:54.136515+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:55.136652+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:56.136866+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:57.137017+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:58.137153+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:34:59.137313+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:00.137538+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:01.137694+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:02.137855+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:03.137993+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:04.138170+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:05.138312+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:06.138521+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:07.138714+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:08.138913+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:09.139092+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:10.139255+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:11.139440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5414 writes, 23K keys, 5414 commit groups, 1.0 writes per commit group, ingest: 18.51 MB, 0.03 MB/s
                                           Interval WAL: 5414 writes, 774 syncs, 6.99 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:12.139550+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:13.139709+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:14.139970+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:15.140232+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:16.140388+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:17.140561+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:18.140794+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:19.140941+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:20.141071+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 385024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:21.141248+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:22.141526+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:23.141676+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:24.141856+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:25.142036+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:26.142250+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:27.142397+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:28.142734+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:29.142934+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:30.143122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:31.143266+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:32.143396+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:33.143585+0000)
Nov 22 06:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8f2c100291afe9ec474fa6687c013a3ec80c2f41f03a7724ef13c66fe7f5f5-merged.mount: Deactivated successfully.
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:34.143742+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:35.143889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:36.144073+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:37.144221+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:38.144351+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:39.144512+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:40.144648+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:41.144792+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:42.144954+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:43.145120+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:44.145278+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:45.145461+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:46.145750+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:47.145897+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:48.146086+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:49.146230+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:50.146359+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:51.146491+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:52.146598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:53.146715+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:54.146829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:55.146948+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:56.147116+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:57.147237+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:58.147344+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:35:59.147555+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:00.147703+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:01.147844+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:02.147985+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:03.148165+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:04.148320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:05.148444+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:06.148627+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:07.148747+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:08.148861+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:09.149010+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:10.149244+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:11.149424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:12.149523+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:13.149742+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:14.149886+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:15.150017+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:16.150205+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:17.150410+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:18.150558+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:19.151617+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:20.151755+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:21.151978+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:22.152211+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:23.152407+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:24.152645+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:25.152911+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:26.153087+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:27.153281+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:28.153497+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:29.153722+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:30.153962+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:31.154113+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:32.154282+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:33.154424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:34.154590+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.154741+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.154907+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.155077+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.155393+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.155515+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.155754+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.156012+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.156347+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.156587+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.156782+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 325.705718994s of 325.754638672s, submitted: 12
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.156970+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.157204+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.157445+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.157760+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.158051+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.158354+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.158801+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.159106+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.159894+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.160315+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.160599+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.161025+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.161430+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.161804+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.162135+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.162528+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.162920+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.163199+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.163624+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.163999+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.164240+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.164521+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.164731+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.164868+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.165048+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.165236+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.165511+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.165658+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.165823+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.166022+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.166183+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.166442+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.166616+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.166851+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.167005+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.167208+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.167386+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.167557+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.167734+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.167886+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.168053+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.168259+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.168435+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.168622+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.170061+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.170694+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.170953+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.171176+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.171344+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.171566+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.171775+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.172014+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.172165+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.172292+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.172518+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.172655+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.172802+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.172930+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.173069+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.173258+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.173401+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.173587+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.173918+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.174067+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.174228+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.174404+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.174542+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.174681+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.174867+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.174998+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.175127+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.175285+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.175378+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.175511+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.175649+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.175829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.175988+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.176133+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.176326+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.176574+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.176770+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.176994+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.177158+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.177334+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.177567+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.177764+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.177922+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.178156+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.178419+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.178677+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.178886+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.179113+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.179398+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.179630+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.179872+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.180117+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.180345+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.180526+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.180690+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.180829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.180968+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.181181+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.181369+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.181540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.181690+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.181829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.181944+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.182075+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.182334+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.182840+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.182967+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.183132+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.183285+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.183458+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.183840+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.183988+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.184154+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.184364+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.184502+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.184625+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.184761+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.184964+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.185338+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.185512+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.185636+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.185770+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.185938+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.186109+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.186274+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.186421+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.186582+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.186765+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.186887+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.186991+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.187119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 podman[277923]: 2025-11-22 06:01:56.505689926 +0000 UTC m=+0.088012276 container remove e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_khorana, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.187259+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.187380+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.187621+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.187817+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.187975+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.188129+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.188307+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.188452+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.188574+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.188772+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.188946+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.189127+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.189276+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.189453+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.189615+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.189763+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.189942+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.190111+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.190535+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.190710+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.190855+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.191038+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.191242+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.191423+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.191547+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.191724+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.191902+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.192119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.192238+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.192437+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.192544+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.192676+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.192798+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.192951+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.193137+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.193303+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.193512+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.193665+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.194032+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.194207+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.194420+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.194606+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.194807+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.195007+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.195215+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.195386+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.195531+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.195672+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.195821+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.195994+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.196161+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.196333+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.196517+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.196686+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 systemd[1]: libpod-conmon-e106a3c28462b603eb5b327e8d6d83968975cac96c9c4b79093ab31959f1368b.scope: Deactivated successfully.
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.196830+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.196967+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.197559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.197733+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.197884+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.198046+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.198234+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.198394+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.198572+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.198721+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.198882+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.199079+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.199334+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.199538+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.199731+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.199932+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.200090+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.200256+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.200420+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.200625+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.200768+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.200916+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.201124+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.201345+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.202175+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.202709+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc ms_handle_reset ms_handle_reset con 0x56464d217c00
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: get_auth_request con 0x56464ffd3800 auth_method 0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.202855+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.202972+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.203101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.203272+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.203410+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.203553+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 ms_handle_reset con 0x56464dfb5000 session 0x56464d1ab4a0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffce400
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.203750+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.203913+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.204017+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.204173+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.204320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.204540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.204688+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.204858+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.205020+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.205169+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.205319+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.205513+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.205689+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.205872+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.206140+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.206288+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.206510+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.207192+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.207421+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.207630+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.207823+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.208023+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.208182+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.208373+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.208668+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.208833+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.209036+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.209192+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.209388+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.209535+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.209734+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.209927+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.210049+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.210179+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.210362+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.210603+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.210832+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.210999+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.211173+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.211354+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.211604+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.211832+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.212007+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.212169+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.212401+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.212584+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.212784+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.212981+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.213130+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.213296+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.213495+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.213698+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.213852+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.214062+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.214572+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.214730+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.214881+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.215040+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.215180+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.215340+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.215529+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.215661+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.215887+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.216052+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.216216+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.216361+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.216586+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.216731+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.216890+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.217056+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.217301+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.217451+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 ms_handle_reset con 0x56464e5b8000 session 0x56464e6b4000
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffce800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.217657+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.217799+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.217963+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.218113+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.218299+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.218445+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.218531+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.218701+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.218846+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.218970+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.219143+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.219326+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.219532+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.219684+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.219852+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.220064+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.220209+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.220397+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.220548+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.220724+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.220848+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.221051+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.221235+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.221367+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.221529+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.221730+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.221888+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.222075+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.222277+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.222435+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.222588+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.222728+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.222947+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.223114+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.223320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.223448+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.223598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.223718+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.223937+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.224105+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.224263+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.224418+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.224583+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.224731+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.224872+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.224988+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.225141+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.225314+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.225533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.225684+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.225800+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.225971+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.226101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.226254+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.226424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.226559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.226716+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.226883+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.227068+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.227239+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.227393+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.227538+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.227675+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.227829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.227998+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.228153+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.228280+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.228443+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.228678+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.228855+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.229008+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.229212+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.229385+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.229562+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.229722+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.229899+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.230030+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.230172+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.230324+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.230520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.230653+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.230808+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.230968+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.231119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.231243+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.231408+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.231573+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.231753+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 sudo[277662]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.231917+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.232031+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.232178+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.232377+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.232541+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.232695+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.232873+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.233071+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.233245+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.233413+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.233564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.233696+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.233830+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.233984+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.234124+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.234277+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.234456+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.234691+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.234841+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.235016+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.235233+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.235424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.235555+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.235737+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.236097+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.236277+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.236421+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.236711+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.236867+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.237048+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.237209+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.237335+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.237571+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.237721+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.237876+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.238047+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.238208+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.238405+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.238521+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.238715+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.239141+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.239297+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.239449+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.239591+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.239710+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.239848+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.240034+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.240178+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.240329+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.240514+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.240738+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.240916+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.241262+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.241422+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.241563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.241679+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.241830+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.242252+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.242405+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.242577+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.242743+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.242916+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.243152+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.243269+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.243337+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.243452+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.243571+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.243794+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.243900+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.244130+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.244302+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.244466+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.244642+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.244917+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:20.245231+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:21.245571+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.245719+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.245931+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.246112+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.246254+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.246466+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.246691+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.246920+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.247097+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.247232+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.247520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.247662+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.247815+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.247975+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.248127+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.248280+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.248550+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.248716+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.248891+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.249044+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.249208+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.249341+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.249525+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.249743+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.249944+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.250254+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.250440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.250670+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.250981+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.251153+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.251372+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.251585+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.251775+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.251912+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.252107+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.252328+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.252538+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.252676+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.252836+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.253006+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.253260+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.253440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.253597+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.253756+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.253930+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.254104+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.254257+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.254418+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.254582+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.254733+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.254898+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5626 writes, 23K keys, 5626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5626 writes, 880 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.255041+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.255228+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.255377+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.255521+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.255659+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.255812+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.255958+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.256146+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.256307+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.256456+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.256686+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.256833+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.257007+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.259307+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.260514+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.263563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.265096+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.266622+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.267622+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.267760+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.268708+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.269021+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.269173+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.269506+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.270038+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.270182+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.270388+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.270695+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.270913+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.271121+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.271339+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.271813+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.272160+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.272345+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.272597+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.272818+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.272987+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.273160+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.273345+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.273567+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.273729+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.273908+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.274090+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.274263+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.274545+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.274770+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.274943+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.275070+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.275219+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.275647+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.275821+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.276011+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.276218+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.276432+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.276734+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.276932+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.277146+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.277407+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.277635+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.277902+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.278174+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.278448+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.278720+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.278963+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.279244+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.279378+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.279522+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.279725+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.279871+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.280055+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.280248+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.280372+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.280572+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.280769+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.281124+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.281313+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.281559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.281783+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.281992+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.282223+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.282431+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.282655+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.282877+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.283100+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.283294+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.283584+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.283784+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.283906+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.284101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.284359+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.284570+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.284771+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.284961+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.158081055s of 599.986328125s, submitted: 106
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.285129+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.285314+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.285537+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.285725+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.285914+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.286033+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.286356+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.286602+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.286864+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.287111+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.287364+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.287674+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.287917+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.288140+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.288398+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.288654+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.288955+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.289108+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.289307+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.289515+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.289674+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.289868+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.290046+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.290229+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.290442+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.290580+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.290780+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.290978+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.291190+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.291382+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.291551+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.291774+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.291928+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.292082+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.292229+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.292365+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.292533+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.292694+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.292868+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.293051+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.293217+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.293347+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.293467+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.293686+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.293787+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.293924+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.294056+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.294254+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.294392+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.297852+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.298032+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.298580+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.299499+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.300158+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.300519+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.301296+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.301463+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.302295+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.302710+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.302953+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.303176+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.303598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.304153+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.304664+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.304909+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.305336+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.305679+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.306013+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.306356+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.306548+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.306830+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.307163+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.307383+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.307599+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.307774+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.308014+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.308241+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.308430+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.308676+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.308922+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.309118+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.309357+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.309565+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.309689+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.309839+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.309976+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.310100+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.310394+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.310600+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.310774+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.310940+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.311085+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.311359+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.311581+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.311761+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.311886+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.312079+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.312268+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.312428+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.312599+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.312781+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.313007+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.313217+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.313435+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.313619+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.313766+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.313880+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.313983+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.314173+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.314385+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.314574+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.314800+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.314954+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.315122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.315235+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.315398+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.316260+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.316460+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.316651+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.316824+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.317026+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.317440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.317901+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.318308+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.318537+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.318788+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.319214+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.319630+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.319885+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.320128+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.320460+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.320819+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.321136+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.321377+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.321623+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.321877+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.322193+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.322604+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.322821+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.323016+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.323243+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.323422+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.323646+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.323878+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.324100+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.324315+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.324561+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.324783+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.324951+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.325115+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.325312+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.325542+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.325732+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.325885+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.326096+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.326302+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.327211+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.327383+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.327543+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.327706+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.327876+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.328089+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.328389+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.328605+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.328792+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.328986+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.329180+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.329408+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.329598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.329784+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.330022+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.330280+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.330597+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.330780+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.331009+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.331196+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.331333+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.331593+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.331787+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.332827+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.332999+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.333206+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.333376+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.333572+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.333718+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.333875+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.334166+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464e5b8000
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.334573+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 handle_osd_map epochs [129,129], i have 127, src has [1,129]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 187.600646973s of 187.992141724s, submitted: 106
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.334825+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.335231+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fca9d000/0x0/0x4ffc00000, data 0xbb90c/0x180000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 130 ms_handle_reset con 0x56464e5b8000 session 0x56464d9e5860
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcec00
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.335455+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 9527296 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.335778+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 131 ms_handle_reset con 0x56464ffcec00 session 0x5646503ea960
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.336101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954681 data_alloc: 218103808 data_used: 217088
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.336401+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc624000/0x0/0x4ffc00000, data 0x52f084/0x5f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.336689+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.336932+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.337137+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc624000/0x0/0x4ffc00000, data 0x52f084/0x5f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.337335+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954841 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.337542+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.337753+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.337995+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.338249+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.338543+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.338693+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.338903+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.339111+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.339318+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.339655+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.339909+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.340170+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.340442+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.341551+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.342078+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.343353+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.344122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.344566+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [3])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.345292+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.345921+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 10
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.346589+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.346834+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.347130+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.347387+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.347520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.347861+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.348012+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.348344+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 11
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.348648+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.348842+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.349095+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.349272+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.349596+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.349879+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.350048+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.350204+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.921653748s of 46.226421356s, submitted: 51
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.350416+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.350573+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.350743+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.350977+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959589 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc61f000/0x0/0x4ffc00000, data 0x5326cd/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.351142+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.351379+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc61f000/0x0/0x4ffc00000, data 0x5326cd/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 sudo[277949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.351628+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.351944+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.352157+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959589 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.353310+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.354297+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.354980+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.712825775s of 11.826250076s, submitted: 30
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.355228+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.355456+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962563 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.355661+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.355994+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.356141+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.356371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.356522+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961683 data_alloc: 218103808 data_used: 221184
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.356683+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.356846+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.357161+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.357330+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.357503+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.357648+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.357929+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.359665+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.359878+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.360087+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.360246+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.360406+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.360598+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.360738+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.360872+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 sudo[277949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.361016+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.769683838s of 22.780818939s, submitted: 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.361171+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.361320+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc619000/0x0/0x4ffc00000, data 0x535d16/0x604000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.361524+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.361708+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966177 data_alloc: 218103808 data_used: 237568
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcf000
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.361899+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 9551872 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc618000/0x0/0x4ffc00000, data 0x535d75/0x605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.362184+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 9551872 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.362384+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 9543680 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.362558+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 9543680 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.362778+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973263 data_alloc: 218103808 data_used: 245760
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.362903+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.363066+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc612000/0x0/0x4ffc00000, data 0x5378d2/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.363230+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.363458+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.499263763s of 12.844394684s, submitted: 39
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 9502720 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.363648+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972255 data_alloc: 218103808 data_used: 245760
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.363817+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53789f/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.363982+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 9486336 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.364178+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.364325+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.364437+0000)
Nov 22 06:01:56 compute-0 sudo[277949]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974151 data_alloc: 218103808 data_used: 245760
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.364617+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc612000/0x0/0x4ffc00000, data 0x53796d/0x60a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.364770+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.365101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.365300+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x537a70/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.939931870s of 10.428670883s, submitted: 23
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.365466+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53790e/0x60a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973367 data_alloc: 218103808 data_used: 245760
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.365623+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 136 handle_osd_map epochs [137,138], i have 136, src has [1,138]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 9445376 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.365779+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 9428992 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.365946+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 8372224 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.366131+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 8372224 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.366353+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc60c000/0x0/0x4ffc00000, data 0x53cc76/0x611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 8364032 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983877 data_alloc: 218103808 data_used: 253952
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.366523+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 8364032 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.366667+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 8355840 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.366851+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8306688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.367050+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc60c000/0x0/0x4ffc00000, data 0x53cd11/0x612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8306688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.367290+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.984500885s of 10.786389351s, submitted: 89
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 7290880 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990059 data_alloc: 218103808 data_used: 262144
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.367454+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 7282688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.367607+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.367773+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x54056a/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.367932+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x5404cf/0x616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.368084+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x5404cf/0x616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 7258112 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994143 data_alloc: 218103808 data_used: 262144
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.368308+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 7241728 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.368593+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.368845+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.369181+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.369371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.349452972s of 10.004354477s, submitted: 108
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998037 data_alloc: 218103808 data_used: 262144
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.369581+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.369729+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.370087+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.370292+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.370552+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.370685+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.370981+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.371159+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.371445+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.371567+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.371724+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.371860+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.372119+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.372371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.372581+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.372813+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.664479256s of 16.246137619s, submitted: 23
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.372990+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.373161+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.373377+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.373633+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.373867+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.374158+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.374394+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fc000/0x0/0x4ffc00000, data 0x5457db/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.374610+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.374826+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003745 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.375107+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.375297+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x5457d9/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.375490+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x5457d9/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.375738+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.375935+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003441 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.376161+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.022244453s of 15.054928780s, submitted: 7
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.376375+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.376623+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.376803+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x545712/0x620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.377008+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002703 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.377208+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.377367+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.377530+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.377733+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x545712/0x620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.377873+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008547 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 6750208 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.378065+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 6750208 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.378264+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.203021049s of 10.306691170s, submitted: 11
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 4472832 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.378583+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 4308992 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.378794+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 3964928 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.379018+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5ad000/0x0/0x4ffc00000, data 0x596977/0x671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016145 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 3784704 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.379286+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 3629056 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.379466+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 3465216 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.379646+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 3293184 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.380686+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 3088384 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.380862+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018071 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1941504 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.381066+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb3a6000/0x0/0x4ffc00000, data 0x5fbde3/0x6d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 598016 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.381244+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.658164978s of 10.665602684s, submitted: 67
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 319488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.381407+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 319488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.381551+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 458752 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.381762+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019981 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb36c000/0x0/0x4ffc00000, data 0x6366ff/0x712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 155648 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.381927+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1064960 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.382084+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 868352 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.382222+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 12
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1040384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.382403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb325000/0x0/0x4ffc00000, data 0x67b94e/0x759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1040384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.382549+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcfc00
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031287 data_alloc: 218103808 data_used: 282624
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1851392 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.382721+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 679936 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.382874+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 507904 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.383107+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x6dac84/0x7b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.196979523s of 10.725030899s, submitted: 63
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 376832 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.383307+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 2064384 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.383534+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb2ab000/0x0/0x4ffc00000, data 0x6f813d/0x7d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035961 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 942080 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.383736+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 942080 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.383919+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 1556480 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.384093+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 1556480 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.384284+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb27c000/0x0/0x4ffc00000, data 0x726a7d/0x802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 1548288 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.384448+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038689 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb244000/0x0/0x4ffc00000, data 0x75e463/0x83a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 1671168 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.384733+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 1581056 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb236000/0x0/0x4ffc00000, data 0x76b99a/0x848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.384980+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 1581056 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.385194+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.385348+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb218000/0x0/0x4ffc00000, data 0x78a573/0x866000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.385550+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.899141312s of 11.591003418s, submitted: 63
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043809 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb20e000/0x0/0x4ffc00000, data 0x79430d/0x870000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.385722+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1794048 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.385864+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb20e000/0x0/0x4ffc00000, data 0x794246/0x86f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2727936 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.386070+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2678784 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.386291+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 2449408 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.386445+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043423 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 2277376 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.386663+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 2269184 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.386862+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 2105344 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.387092+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb1b1000/0x0/0x4ffc00000, data 0x7f1f1e/0x8cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 2441216 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.387314+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 2359296 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.387593+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.556157112s of 10.000567436s, submitted: 51
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb16b000/0x0/0x4ffc00000, data 0x836c5b/0x913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050139 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.387761+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.387917+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.388039+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1630208 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.388235+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 1327104 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.388369+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060111 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 1171456 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.388553+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb12a000/0x0/0x4ffc00000, data 0x8784c4/0x954000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.388732+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.388879+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.389024+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1351680 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.389172+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.619530678s of 10.068819046s, submitted: 44
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063633 data_alloc: 218103808 data_used: 278528
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2138112 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.389356+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2138112 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.389562+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0b8000/0x0/0x4ffc00000, data 0x8eaa2f/0x9c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 1859584 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.389745+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0b8000/0x0/0x4ffc00000, data 0x8eaa2f/0x9c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x8fb6d4/0x9d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87064576 unmapped: 835584 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.389941+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb099000/0x0/0x4ffc00000, data 0x90a4e3/0x9e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 606208 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.390085+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066459 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 253952 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.390268+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 188416 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.390435+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 188416 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.390610+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.390777+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.390915+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067111 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.391055+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.391198+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.391344+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.391582+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.391729+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 145 handle_osd_map epochs [146,146], i have 147, src has [1,147]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.992734909s of 15.773614883s, submitted: 64
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074259 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.391926+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.392107+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 1703936 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.392286+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb054000/0x0/0x4ffc00000, data 0x94a98d/0xa29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.392416+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.392569+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071831 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.392725+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb054000/0x0/0x4ffc00000, data 0x94a98d/0xa29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.392910+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.393076+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.393240+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.393422+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073411 data_alloc: 218103808 data_used: 286720
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.393586+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.393749+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.393932+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.394061+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.394214+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073731 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.394403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.394563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.394682+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.394865+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.394980+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.395137+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073731 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.395280+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.338314056s of 21.669164658s, submitted: 42
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.395568+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.395744+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.395918+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c410/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.396071+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075499 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 ms_handle_reset con 0x56464ffcfc00 session 0x56464f8ab680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.396225+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 13
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.396400+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c410/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.396639+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.396773+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.396944+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.397116+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.397287+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.397425+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.397562+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.397692+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.397799+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.397911+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.398127+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.398274+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.731567383s of 17.763555527s, submitted: 136
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.398437+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075649 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c43d/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.398618+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.398839+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.398999+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.399180+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.399338+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c43b/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.399542+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c43b/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.399713+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.399894+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.400063+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.400209+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.400341+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.400601+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.505785942s of 13.548912048s, submitted: 7
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.400749+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.400926+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.401080+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.401243+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.401393+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.401553+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.401730+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.401945+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.402095+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.402233+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.402449+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.402645+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.402786+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.402997+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.486737251s of 13.497967720s, submitted: 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.403181+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.403383+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.403568+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.403694+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.403861+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.404028+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.404227+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.404383+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.404543+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.404699+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.404889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.406508+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.406864+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.407050+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.407429+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.252617836s of 15.257152557s, submitted: 1
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.407955+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.408124+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c3a5/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7325 writes, 29K keys, 7325 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7325 writes, 1543 syncs, 4.75 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1699 writes, 5316 keys, 1699 commit groups, 1.0 writes per commit group, ingest: 7.07 MB, 0.01 MB/s
                                           Interval WAL: 1699 writes, 663 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.408296+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.408586+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077241 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.408837+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c46b/0xa2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.409022+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:16.409181+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.409360+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.409629+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077113 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c46b/0xa2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.409894+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc ms_handle_reset ms_handle_reset con 0x56464ffd3800
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: get_auth_request con 0x56465051d000 auth_method 0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.410075+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.410279+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.410462+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.410673+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076375 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c3a5/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.410840+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.411047+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 ms_handle_reset con 0x56464ffce400 session 0x56464da092c0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x564651636000
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.411318+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 999424 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.098913193s of 17.159513474s, submitted: 11
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.411509+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 925696 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.411739+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083361 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 925696 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.411890+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb033000/0x0/0x4ffc00000, data 0x9696cb/0xa4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 811008 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.412029+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 794624 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.412180+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88170496 unmapped: 778240 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.412325+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.412526+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085661 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.412666+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafff000/0x0/0x4ffc00000, data 0x99e5f5/0xa7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.412784+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 1695744 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.412935+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 1695744 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.479039192s of 10.263687134s, submitted: 42
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.413037+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 1540096 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.413279+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090171 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.413425+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.413669+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafce000/0x0/0x4ffc00000, data 0x9d01c7/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.413868+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafce000/0x0/0x4ffc00000, data 0x9d01c7/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.414095+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 1802240 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.414309+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088099 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 1802240 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafb4000/0x0/0x4ffc00000, data 0x9ea131/0xaca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.414455+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.414698+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.414939+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.607292175s of 10.000217438s, submitted: 7
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.415086+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.415312+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088335 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.415558+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.415755+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.415924+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.416143+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.416326+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088335 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.416546+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.416826+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.417054+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.417238+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.095285416s of 10.118075371s, submitted: 5
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xa0788e/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.417411+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090539 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.417586+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.417853+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.418048+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xa0788e/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.418231+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.418405+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0xa1db4a/0xaff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092351 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.418552+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 1409024 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.418672+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 2670592 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 22 06:01:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649733032' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.418925+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf4f000/0x0/0x4ffc00000, data 0xa4c367/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 2670592 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.419055+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 2564096 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.419252+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099679 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 2408448 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.348463058s of 11.521899223s, submitted: 32
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf3b000/0x0/0x4ffc00000, data 0xa5f9ac/0xb42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,2,1])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.419391+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 999424 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.419624+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 999424 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.419776+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 827392 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.419949+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 1007616 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.420084+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102317 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 1007616 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.420206+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faedf000/0x0/0x4ffc00000, data 0xabd3f8/0xb9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 1064960 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.420403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 1392640 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.420968+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 2179072 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.421122+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae8a000/0x0/0x4ffc00000, data 0xb10d52/0xbf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.421281+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae8a000/0x0/0x4ffc00000, data 0xb10d52/0xbf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107149 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.729861259s of 10.001068115s, submitted: 57
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.421433+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.421563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 548864 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.421709+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 548864 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.421835+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 385024 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.422176+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109021 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2121728 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.422391+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae26000/0x0/0x4ffc00000, data 0xb76964/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2121728 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.422611+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2097152 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.422851+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2097152 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.422974+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2080768 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.423162+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa9cd000/0x0/0x4ffc00000, data 0xbbf591/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116113 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2080768 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa9cd000/0x0/0x4ffc00000, data 0xbbf591/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.423302+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.608607292s of 10.946014404s, submitted: 51
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93241344 unmapped: 1998848 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.423450+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92717056 unmapped: 2523136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.423667+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93880320 unmapped: 1359872 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.423799+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93888512 unmapped: 2400256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.423955+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115413 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93888512 unmapped: 2400256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.424115+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa96d000/0x0/0x4ffc00000, data 0xc1f447/0xd01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2244608 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.424265+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1982464 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.424501+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1982464 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.424652+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa93b000/0x0/0x4ffc00000, data 0xc51e55/0xd33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94625792 unmapped: 1662976 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.424833+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121513 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 2506752 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.425021+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8ea000/0x0/0x4ffc00000, data 0xca2fa3/0xd84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 2449408 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.425181+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 2449408 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.425352+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.250131607s of 11.549646378s, submitted: 67
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.425521+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 1236992 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.425723+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136145 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 1073152 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.425874+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1040384 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.426000+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa873000/0x0/0x4ffc00000, data 0xd17636/0xdfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95322112 unmapped: 2015232 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.426127+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 1966080 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.426299+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.426412+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132769 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.426623+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa83f000/0x0/0x4ffc00000, data 0xd4c576/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.426767+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2408448 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.426959+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.427200+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.427414+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132079 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.240543365s of 12.828203201s, submitted: 151
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.427780+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.427971+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.428314+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.428465+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2392064 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.428621+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129741 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2392064 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.428808+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.428989+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.429108+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.429295+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.429566+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129741 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.429794+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.429941+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.993387222s of 12.013343811s, submitted: 2
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.430169+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.430387+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.430670+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa842000/0x0/0x4ffc00000, data 0xd4c3a5/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129051 data_alloc: 218103808 data_used: 294912
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.430864+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.431028+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa842000/0x0/0x4ffc00000, data 0xd4c3a5/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.431215+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.431389+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.431557+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134993 data_alloc: 218103808 data_used: 303104
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.431754+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.431964+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.432214+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0xd4e026/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.432371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.432564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134993 data_alloc: 218103808 data_used: 303104
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.432737+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0xd4e026/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.432924+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.021748543s of 14.105439186s, submitted: 21
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.433064+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.433214+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.433368+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132345 data_alloc: 218103808 data_used: 303104
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.433564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83f000/0x0/0x4ffc00000, data 0xd4df8b/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.433730+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.433898+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x564651636400
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 14
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.434047+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 2285568 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.434246+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139175 data_alloc: 218103808 data_used: 311296
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 2285568 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.434412+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb9a/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95076352 unmapped: 2260992 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.434570+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb20/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.434799+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.434960+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.435226+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139479 data_alloc: 218103808 data_used: 323584
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.435416+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.435629+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.750409126s of 15.925973892s, submitted: 16
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.435872+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb20/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.436059+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.436187+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136861 data_alloc: 218103808 data_used: 319488
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.436385+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.436548+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.436691+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83c000/0x0/0x4ffc00000, data 0xd4f9ee/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.436859+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.436985+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141035 data_alloc: 218103808 data_used: 327680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.437164+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.437271+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.437454+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.438717+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.438896+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141035 data_alloc: 218103808 data_used: 327680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.439076+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.439253+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.859284401s of 15.013036728s, submitted: 41
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.439417+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.439563+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.439672+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144009 data_alloc: 218103808 data_used: 327680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.439829+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa835000/0x0/0x4ffc00000, data 0xd53087/0xe38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.439974+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 2211840 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.440130+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.440258+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.440424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145777 data_alloc: 218103808 data_used: 327680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.440552+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.440714+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.440863+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.440983+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.441135+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145777 data_alloc: 218103808 data_used: 327680
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.441321+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.441454+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.953479767s of 14.017258644s, submitted: 14
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.441671+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.441803+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0xd53087/0xe38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.441945+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa832000/0x0/0x4ffc00000, data 0xd54c6d/0xe3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149071 data_alloc: 218103808 data_used: 335872
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.442095+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.442266+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0xd54d08/0xe3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.442424+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.442561+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.442731+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 2211840 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154243 data_alloc: 218103808 data_used: 344064
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.442909+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 2203648 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58312/0xe41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58312/0xe41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.443057+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.443255+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.443514+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.443718+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.397949219s of 13.738856316s, submitted: 53
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156227 data_alloc: 218103808 data_used: 344064
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.443871+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.444132+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58519/0xe43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.444506+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.444702+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.444889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa827000/0x0/0x4ffc00000, data 0xd5a13f/0xe46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163855 data_alloc: 218103808 data_used: 356352
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.445352+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa824000/0x0/0x4ffc00000, data 0xd5badd/0xe48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.445720+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.446419+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.446623+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.447076+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0xd5bbef/0xe49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.447410+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163541 data_alloc: 218103808 data_used: 360448
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.204563141s of 10.300806046s, submitted: 34
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.447566+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.447867+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa826000/0x0/0x4ffc00000, data 0xd5b9e7/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.448037+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.448286+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa826000/0x0/0x4ffc00000, data 0xd5b9e7/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.448450+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 2121728 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.448632+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 2121728 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.448803+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95223808 unmapped: 2113536 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.449003+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.449172+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.449404+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.449570+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.449801+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.449967+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.450163+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.450427+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.450604+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.450773+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.450968+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.451172+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.451393+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.451695+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.451889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.452068+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 2088960 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.776588440s of 23.795372009s, submitted: 13
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.452191+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.452350+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168071 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa820000/0x0/0x4ffc00000, data 0xd5f050/0xe4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.452634+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.452862+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.453022+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.453160+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.453302+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168071 data_alloc: 218103808 data_used: 364544
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 2072576 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa820000/0x0/0x4ffc00000, data 0xd5f050/0xe4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.453564+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.453786+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.453993+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 159 handle_osd_map epochs [160,161], i have 159, src has [1,161]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.872550011s of 10.118376732s, submitted: 39
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.454136+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 2031616 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.454281+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180155 data_alloc: 218103808 data_used: 376832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2007040 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.454458+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa817000/0x0/0x4ffc00000, data 0xd62886/0xe56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2007040 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.454658+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1974272 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.454784+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1974272 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.454919+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa815000/0x0/0x4ffc00000, data 0xd643d1/0xe58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.455079+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179967 data_alloc: 218103808 data_used: 376832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.455187+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.455316+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa816000/0x0/0x4ffc00000, data 0xd64336/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.455578+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa816000/0x0/0x4ffc00000, data 0xd643d1/0xe58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.455754+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.599443436s of 10.774303436s, submitted: 54
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.455896+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185029 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.456087+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.456252+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.456403+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0xd65d99/0xe5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.456544+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 1916928 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.456715+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187137 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 1916928 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.456861+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.457133+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.457323+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.457535+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 1900544 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.457704+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187137 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359882355s of 10.512098312s, submitted: 50
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0xd69432/0xe60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.457920+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.458129+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0xd69397/0xe5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.458280+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.458440+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 165 handle_osd_map epochs [166,167], i have 165, src has [1,167]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.458603+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195897 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 827392 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.458785+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 827392 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.458907+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 819200 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.459048+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa807000/0x0/0x4ffc00000, data 0xd6cc2e/0xe66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 819200 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.459208+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 811008 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.459373+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197665 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 811008 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.567685127s of 10.729619026s, submitted: 50
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.459524+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.459769+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.459883+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.460006+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fa804000/0x0/0x4ffc00000, data 0xd6e6b1/0xe69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.460134+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201703 data_alloc: 218103808 data_used: 385024
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.460338+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.460461+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.460636+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.460845+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 168 ms_handle_reset con 0x564651636400 session 0x56464e6b4780
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.461000+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 15
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.461104+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.461271+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.461461+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.461633+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.461793+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.461975+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.462158+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.462338+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.462624+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.462777+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.462978+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.953063965s of 20.050271988s, submitted: 204
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.463205+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.463384+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.463559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.463709+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205513 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.463859+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.464076+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.464266+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.464408+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.464785+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205513 data_alloc: 218103808 data_used: 393216
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.464969+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.465191+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.465520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.465724+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.465938+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.466081+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.466282+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.466452+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.466703+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.466889+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.467149+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.467446+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.467584+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.467725+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.467910+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.468045+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.468190+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.468414+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.468613+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.468804+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.468982+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.469206+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.469363+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.469632+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.470056+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.470397+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.470663+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.470895+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.471869+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.472156+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.472347+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.472549+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.472911+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.473333+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.473580+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.473721+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.474080+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.474346+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.474587+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.334320068s of 48.347373962s, submitted: 15
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 ms_handle_reset con 0x56464ffcf000 session 0x5646504241e0
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.474845+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 16
Nov 22 06:01:56 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.475010+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.475310+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.475616+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.475854+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.476014+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.476152+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.476367+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.476520+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.476806+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.477046+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.477211+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.477530+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.477758+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.477972+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.478217+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.478371+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.478600+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.479195+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.480175+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.480743+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.480862+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.481101+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.481362+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.481559+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.481711+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.481906+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.482026+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.482142+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.482270+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.482399+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.482540+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.482703+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:17.482813+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:18.482956+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:19.483108+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:20.483274+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:21.483393+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:22.483549+0000)
Nov 22 06:01:56 compute-0 sudo[277974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:23.483675+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}'
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98246656 unmapped: 1187840 heap: 99434496 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:24.483853+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:01:56 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:01:56 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98336768 unmapped: 2146304 heap: 100483072 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:01:56 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:25.483977+0000)
Nov 22 06:01:56 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:01:56 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 2097152 heap: 100483072 old mem: 2845415832 new mem: 2845415832
Nov 22 06:01:56 compute-0 ceph-osd[89779]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:01:56 compute-0 sudo[277974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:56 compute-0 sudo[277974]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:56 compute-0 sudo[278005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:56 compute-0 sudo[278005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:56 compute-0 sudo[278005]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:56 compute-0 sudo[278032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:01:56 compute-0 sudo[278032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 06:01:56 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 06:01:56 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.101746218 +0000 UTC m=+0.061276011 container create 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:01:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 22 06:01:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454167986' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 06:01:57 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2334916369' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:01:57 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2649733032' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 06:01:57 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 06:01:57 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 06:01:57 compute-0 ceph-mon[75840]: pgmap v1276: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:57 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/454167986' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 06:01:57 compute-0 systemd[1]: Started libpod-conmon-0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9.scope.
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.061893892 +0000 UTC m=+0.021423715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.175960075 +0000 UTC m=+0.135489888 container init 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.182820748 +0000 UTC m=+0.142350551 container start 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 06:01:57 compute-0 blissful_torvalds[278167]: 167 167
Nov 22 06:01:57 compute-0 systemd[1]: libpod-0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9.scope: Deactivated successfully.
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.18888614 +0000 UTC m=+0.148415933 container attach 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.189152518 +0000 UTC m=+0.148682301 container died 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 06:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7377a9aba3dab1497844ecd7bcb8b063f9555f4974530fa971c2d50ae7c15f7f-merged.mount: Deactivated successfully.
Nov 22 06:01:57 compute-0 podman[278146]: 2025-11-22 06:01:57.223935948 +0000 UTC m=+0.183465751 container remove 0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 06:01:57 compute-0 systemd[1]: libpod-conmon-0fbbe4be05fd5d238ffd29c01ce202984bd7d9740a1b66fadd17b3a662a46da9.scope: Deactivated successfully.
Nov 22 06:01:57 compute-0 podman[278215]: 2025-11-22 06:01:57.388649417 +0000 UTC m=+0.044966634 container create e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 06:01:57 compute-0 systemd[1]: Started libpod-conmon-e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e.scope.
Nov 22 06:01:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45750a95b68e8084a7d4d98157f29484fb10615dfec17f3812701458dfff5351/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45750a95b68e8084a7d4d98157f29484fb10615dfec17f3812701458dfff5351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45750a95b68e8084a7d4d98157f29484fb10615dfec17f3812701458dfff5351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:57 compute-0 podman[278215]: 2025-11-22 06:01:57.367910942 +0000 UTC m=+0.024228259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45750a95b68e8084a7d4d98157f29484fb10615dfec17f3812701458dfff5351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:01:57 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14665 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:57 compute-0 podman[278215]: 2025-11-22 06:01:57.473541179 +0000 UTC m=+0.129858476 container init e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 06:01:57 compute-0 podman[278215]: 2025-11-22 06:01:57.482872588 +0000 UTC m=+0.139189845 container start e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:01:57 compute-0 podman[278215]: 2025-11-22 06:01:57.487251056 +0000 UTC m=+0.143568313 container attach e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 06:01:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 22 06:01:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136944748' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 06:01:58 compute-0 ceph-mon[75840]: from='client.14665 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:58 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1136944748' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 06:01:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 22 06:01:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4238281323' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 06:01:58 compute-0 infallible_nash[278234]: {
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_id": 1,
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "type": "bluestore"
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     },
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_id": 2,
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "type": "bluestore"
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     },
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_id": 0,
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:01:58 compute-0 infallible_nash[278234]:         "type": "bluestore"
Nov 22 06:01:58 compute-0 infallible_nash[278234]:     }
Nov 22 06:01:58 compute-0 infallible_nash[278234]: }
Nov 22 06:01:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:01:58 compute-0 systemd[1]: libpod-e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e.scope: Deactivated successfully.
Nov 22 06:01:58 compute-0 podman[278215]: 2025-11-22 06:01:58.445896422 +0000 UTC m=+1.102213679 container died e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-45750a95b68e8084a7d4d98157f29484fb10615dfec17f3812701458dfff5351-merged.mount: Deactivated successfully.
Nov 22 06:01:58 compute-0 podman[278215]: 2025-11-22 06:01:58.512908016 +0000 UTC m=+1.169225243 container remove e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 06:01:58 compute-0 systemd[1]: libpod-conmon-e89c779f770849f2259cb7928964972a9870c04a41ce86439301845251f10b9e.scope: Deactivated successfully.
Nov 22 06:01:58 compute-0 sudo[278032]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:01:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:01:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c3fed511-ce0c-47f3-8cbb-67db63990353 does not exist
Nov 22 06:01:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8b74a2eb-7d29-4adb-8cfc-fdb4ba9cf958 does not exist
Nov 22 06:01:58 compute-0 sudo[278405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:01:58 compute-0 sudo[278405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:58 compute-0 sudo[278405]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:58 compute-0 sudo[278430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:01:58 compute-0 sudo[278430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:01:58 compute-0 sudo[278430]: pam_unix(sudo:session): session closed for user root
Nov 22 06:01:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 22 06:01:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554179606' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 06:01:58 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 06:01:58 compute-0 systemd[1]: Started Hostname Service.
Nov 22 06:01:58 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 22 06:01:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3233315096' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 06:01:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4238281323' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 06:01:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:01:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2554179606' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 06:01:59 compute-0 ceph-mon[75840]: pgmap v1277: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:01:59 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3233315096' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 06:01:59 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14675 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:01:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 22 06:01:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271906885' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 22 06:02:00 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927218790' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mon[75840]: from='client.14675 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3271906885' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2927218790' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14681 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:00 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:01 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 22 06:02:01 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936727375' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 06:02:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14685 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:01 compute-0 ceph-mon[75840]: from='client.14681 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:01 compute-0 ceph-mon[75840]: pgmap v1278: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:01 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2936727375' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 06:02:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14687 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 22 06:02:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1904771330' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 22 06:02:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826410941' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: from='client.14685 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: from='client.14687 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1904771330' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2826410941' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 06:02:02 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14693 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14695 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:03 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:02:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 22 06:02:03 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611780512' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 06:02:03 compute-0 ceph-mon[75840]: pgmap v1279: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:03 compute-0 ceph-mon[75840]: from='client.14693 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 22 06:02:04 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647054246' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14701 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mon[75840]: from='client.14695 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/611780512' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3647054246' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 06:02:04 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:05 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14703 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 06:02:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3288972514' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:02:05 compute-0 podman[279468]: 2025-11-22 06:02:05.837568568 +0000 UTC m=+0.107488547 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 06:02:05 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 22 06:02:05 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/594627849' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 06:02:05 compute-0 ceph-mon[75840]: from='client.14701 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:05 compute-0 ceph-mon[75840]: pgmap v1280: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:05 compute-0 ceph-mon[75840]: from='client.14703 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:02:05 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3288972514' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:02:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 22 06:02:06 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3300576157' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14711 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:06 compute-0 ovs-appctl[279995]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 22 06:02:06 compute-0 ovs-appctl[279999]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 22 06:02:06 compute-0 ovs-appctl[280005]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 22 06:02:06 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/594627849' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 06:02:06 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3300576157' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:06 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 06:02:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335577126' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 22 06:02:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2574737107' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 22 06:02:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401308611' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: from='client.14711 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: pgmap v1281: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2335577126' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2574737107' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 22 06:02:07 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3401308611' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 22 06:02:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195872164' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 22 06:02:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14721 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2195872164' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 22 06:02:08 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 22 06:02:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31528563' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 22 06:02:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 22 06:02:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729202032' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14727 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:09 compute-0 ceph-mon[75840]: from='client.14721 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:09 compute-0 ceph-mon[75840]: pgmap v1282: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/31528563' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 22 06:02:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2729202032' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 22 06:02:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/677164084' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mon[75840]: from='client.14727 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/677164084' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:10 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 22 06:02:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174816431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 22 06:02:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294295317' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:11 compute-0 ceph-mon[75840]: from='client.14731 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:11 compute-0 ceph-mon[75840]: from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:11 compute-0 ceph-mon[75840]: pgmap v1283: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/174816431' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 22 06:02:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/294295317' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14741 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:02:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 06:02:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475685810' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:02:12 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:12 compute-0 ceph-mon[75840]: from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:12 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/475685810' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:02:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 22 06:02:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2452201248' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 22 06:02:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14747 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:14 compute-0 ceph-mon[75840]: from='client.14741 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:14 compute-0 ceph-mon[75840]: pgmap v1284: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:14 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2452201248' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 22 06:02:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 06:02:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011875996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 22 06:02:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742125204' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:14 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:15 compute-0 ceph-mon[75840]: from='client.14747 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:15 compute-0 ceph-mon[75840]: from='client.14749 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:02:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2011875996' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2742125204' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 22 06:02:15 compute-0 podman[281464]: 2025-11-22 06:02:15.216706656 +0000 UTC m=+0.071012662 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 06:02:15 compute-0 podman[281463]: 2025-11-22 06:02:15.22355516 +0000 UTC m=+0.072865192 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:02:16 compute-0 ceph-mon[75840]: pgmap v1285: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:16 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 06:02:16 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:17 compute-0 ceph-mon[75840]: pgmap v1286: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.162 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.162 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.163 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:02:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:02:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724292503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.626 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:02:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1724292503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.848 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.850 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4773MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.850 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.850 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.950 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.951 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:02:18 compute-0 nova_compute[255660]: 2025-11-22 06:02:18.965 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:02:18 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:02:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1506516582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:02:19 compute-0 nova_compute[255660]: 2025-11-22 06:02:19.448 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:02:19 compute-0 nova_compute[255660]: 2025-11-22 06:02:19.455 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:02:19 compute-0 nova_compute[255660]: 2025-11-22 06:02:19.487 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:02:19 compute-0 nova_compute[255660]: 2025-11-22 06:02:19.490 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:02:19 compute-0 nova_compute[255660]: 2025-11-22 06:02:19.491 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:02:19 compute-0 ceph-mon[75840]: pgmap v1287: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1506516582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:02:20 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 06:02:20 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 06:02:20 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:22 compute-0 ceph-mon[75840]: pgmap v1288: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:22 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:24 compute-0 ceph-mon[75840]: pgmap v1289: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:24 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:25 compute-0 ceph-mon[75840]: pgmap v1290: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:26 compute-0 nova_compute[255660]: 2025-11-22 06:02:26.492 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:26 compute-0 nova_compute[255660]: 2025-11-22 06:02:26.493 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:26 compute-0 nova_compute[255660]: 2025-11-22 06:02:26.493 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:02:26 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:28 compute-0 ceph-mon[75840]: pgmap v1291: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:28 compute-0 nova_compute[255660]: 2025-11-22 06:02:28.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:28 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:29 compute-0 nova_compute[255660]: 2025-11-22 06:02:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:29 compute-0 ceph-mon[75840]: pgmap v1292: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:30 compute-0 nova_compute[255660]: 2025-11-22 06:02:30.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:30 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:31 compute-0 nova_compute[255660]: 2025-11-22 06:02:31.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:31 compute-0 nova_compute[255660]: 2025-11-22 06:02:31.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:32 compute-0 ceph-mon[75840]: pgmap v1293: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:32 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:33 compute-0 sshd-session[282018]: Invalid user validator from 80.94.92.166 port 54332
Nov 22 06:02:33 compute-0 nova_compute[255660]: 2025-11-22 06:02:33.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:02:33 compute-0 nova_compute[255660]: 2025-11-22 06:02:33.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:02:33 compute-0 nova_compute[255660]: 2025-11-22 06:02:33.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:02:33 compute-0 nova_compute[255660]: 2025-11-22 06:02:33.149 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:02:33 compute-0 sshd-session[282018]: Connection closed by invalid user validator 80.94.92.166 port 54332 [preauth]
Nov 22 06:02:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:34 compute-0 ceph-mon[75840]: pgmap v1294: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:34 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:35 compute-0 ceph-mon[75840]: pgmap v1295: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:36 compute-0 podman[282021]: 2025-11-22 06:02:36.232061288 +0000 UTC m=+0.091506520 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:02:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:02:36.943 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:02:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:02:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:02:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:02:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:02:36 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:38 compute-0 ceph-mon[75840]: pgmap v1296: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:38 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:39 compute-0 ceph-mon[75840]: pgmap v1297: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:40 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:42 compute-0 ceph-mon[75840]: pgmap v1298: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:42 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:02:43
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr']
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:02:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:02:44 compute-0 ceph-mon[75840]: pgmap v1299: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:02:44 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:45 compute-0 ceph-mon[75840]: pgmap v1300: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:45 compute-0 sudo[274349]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:45 compute-0 sshd-session[274348]: Received disconnect from 192.168.122.10 port 56488:11: disconnected by user
Nov 22 06:02:45 compute-0 sshd-session[274348]: Disconnected from user zuul 192.168.122.10 port 56488
Nov 22 06:02:45 compute-0 sshd-session[274344]: pam_unix(sshd:session): session closed for user zuul
Nov 22 06:02:45 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Nov 22 06:02:45 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 22 06:02:45 compute-0 systemd[1]: session-51.scope: Consumed 2min 37.365s CPU time, 772.7M memory peak, read 311.4M from disk, written 92.5M to disk.
Nov 22 06:02:45 compute-0 systemd-logind[798]: Removed session 51.
Nov 22 06:02:45 compute-0 sshd-session[282049]: Accepted publickey for zuul from 192.168.122.10 port 46814 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 06:02:45 compute-0 systemd-logind[798]: New session 52 of user zuul.
Nov 22 06:02:45 compute-0 podman[282050]: 2025-11-22 06:02:45.773085658 +0000 UTC m=+0.098222990 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 22 06:02:45 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 22 06:02:45 compute-0 podman[282048]: 2025-11-22 06:02:45.778447422 +0000 UTC m=+0.105805454 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 06:02:45 compute-0 sshd-session[282049]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 06:02:45 compute-0 sudo[282087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-22-ueetkww.tar.xz
Nov 22 06:02:45 compute-0 sudo[282087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 06:02:45 compute-0 sudo[282087]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:45 compute-0 sshd-session[282086]: Received disconnect from 192.168.122.10 port 46814:11: disconnected by user
Nov 22 06:02:45 compute-0 sshd-session[282086]: Disconnected from user zuul 192.168.122.10 port 46814
Nov 22 06:02:45 compute-0 sshd-session[282049]: pam_unix(sshd:session): session closed for user zuul
Nov 22 06:02:45 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 22 06:02:45 compute-0 systemd-logind[798]: Session 52 logged out. Waiting for processes to exit.
Nov 22 06:02:45 compute-0 systemd-logind[798]: Removed session 52.
Nov 22 06:02:46 compute-0 sshd-session[282112]: Accepted publickey for zuul from 192.168.122.10 port 46816 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 06:02:46 compute-0 systemd-logind[798]: New session 53 of user zuul.
Nov 22 06:02:46 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 22 06:02:46 compute-0 sshd-session[282112]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 06:02:46 compute-0 sudo[282116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 22 06:02:46 compute-0 sudo[282116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 06:02:46 compute-0 sudo[282116]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:46 compute-0 sshd-session[282115]: Received disconnect from 192.168.122.10 port 46816:11: disconnected by user
Nov 22 06:02:46 compute-0 sshd-session[282115]: Disconnected from user zuul 192.168.122.10 port 46816
Nov 22 06:02:46 compute-0 sshd-session[282112]: pam_unix(sshd:session): session closed for user zuul
Nov 22 06:02:46 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 22 06:02:46 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Nov 22 06:02:46 compute-0 systemd-logind[798]: Removed session 53.
Nov 22 06:02:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:02:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1626707867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:02:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:02:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1626707867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:02:48 compute-0 ceph-mon[75840]: pgmap v1301: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1626707867' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:02:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/1626707867' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:02:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:50 compute-0 ceph-mon[75840]: pgmap v1302: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:51 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 06:02:51 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 06:02:52 compute-0 ceph-mon[75840]: pgmap v1303: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:02:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:02:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:54 compute-0 ceph-mon[75840]: pgmap v1304: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:56 compute-0 ceph-mon[75840]: pgmap v1305: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:58 compute-0 ceph-mon[75840]: pgmap v1306: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:02:58 compute-0 sudo[282145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:02:58 compute-0 sudo[282145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:58 compute-0 sudo[282145]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:58 compute-0 sudo[282170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:02:58 compute-0 sudo[282170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:58 compute-0 sudo[282170]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:58 compute-0 sudo[282195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:02:58 compute-0 sudo[282195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:58 compute-0 sudo[282195]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:58 compute-0 sudo[282220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:02:58 compute-0 sudo[282220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:02:59 compute-0 sudo[282220]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:02:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 149e4e49-bd40-4fcb-b078-3518ba50f2f5 does not exist
Nov 22 06:02:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 0ebf8a6a-0c95-4f5b-bb1f-afed17e58b00 does not exist
Nov 22 06:02:59 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 301ed263-8517-4f11-83f8-07534bf58b71 does not exist
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:02:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:02:59 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:02:59 compute-0 sudo[282277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:02:59 compute-0 sudo[282277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:59 compute-0 sudo[282277]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:59 compute-0 sudo[282302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:02:59 compute-0 sudo[282302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:59 compute-0 sudo[282302]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:59 compute-0 sudo[282327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:02:59 compute-0 sudo[282327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:02:59 compute-0 sudo[282327]: pam_unix(sudo:session): session closed for user root
Nov 22 06:02:59 compute-0 sudo[282352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:02:59 compute-0 sudo[282352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:00 compute-0 ceph-mon[75840]: pgmap v1307: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:03:00 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.400378083 +0000 UTC m=+0.076414386 container create 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 06:03:00 compute-0 systemd[1]: Started libpod-conmon-55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7.scope.
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.365964802 +0000 UTC m=+0.042001155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.500297337 +0000 UTC m=+0.176333640 container init 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.508801405 +0000 UTC m=+0.184837678 container start 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.512459793 +0000 UTC m=+0.188496096 container attach 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 06:03:00 compute-0 infallible_robinson[282434]: 167 167
Nov 22 06:03:00 compute-0 systemd[1]: libpod-55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7.scope: Deactivated successfully.
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.518232017 +0000 UTC m=+0.194268320 container died 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8310c570bdd6b656e710a629eda1b676b2e6c3375233373e6d75491fc69810bf-merged.mount: Deactivated successfully.
Nov 22 06:03:00 compute-0 podman[282417]: 2025-11-22 06:03:00.566613632 +0000 UTC m=+0.242649905 container remove 55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 06:03:00 compute-0 systemd[1]: libpod-conmon-55f09b56d2bb5fc924526d5451bc5fb5b1d23f4b4f9f981f676e2585980c94d7.scope: Deactivated successfully.
Nov 22 06:03:00 compute-0 podman[282458]: 2025-11-22 06:03:00.786708093 +0000 UTC m=+0.063747548 container create 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:03:00 compute-0 systemd[1]: Started libpod-conmon-2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76.scope.
Nov 22 06:03:00 compute-0 podman[282458]: 2025-11-22 06:03:00.756264448 +0000 UTC m=+0.033303963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:00 compute-0 podman[282458]: 2025-11-22 06:03:00.899869322 +0000 UTC m=+0.176908817 container init 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:03:00 compute-0 podman[282458]: 2025-11-22 06:03:00.915371536 +0000 UTC m=+0.192410981 container start 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:03:00 compute-0 podman[282458]: 2025-11-22 06:03:00.919191988 +0000 UTC m=+0.196231433 container attach 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:03:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:01 compute-0 ceph-mon[75840]: pgmap v1308: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:01 compute-0 determined_benz[282474]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:03:01 compute-0 determined_benz[282474]: --> relative data size: 1.0
Nov 22 06:03:01 compute-0 determined_benz[282474]: --> All data devices are unavailable
Nov 22 06:03:02 compute-0 systemd[1]: libpod-2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76.scope: Deactivated successfully.
Nov 22 06:03:02 compute-0 podman[282458]: 2025-11-22 06:03:02.000538289 +0000 UTC m=+1.277577704 container died 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 06:03:02 compute-0 systemd[1]: libpod-2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76.scope: Consumed 1.023s CPU time.
Nov 22 06:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8ae91b21be222961e1de02528a05448dfa5158525092faa472bc0879c2d7438-merged.mount: Deactivated successfully.
Nov 22 06:03:02 compute-0 podman[282458]: 2025-11-22 06:03:02.094130434 +0000 UTC m=+1.371169889 container remove 2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 06:03:02 compute-0 systemd[1]: libpod-conmon-2b565d536e1997b07a42131c928efea2abc5a56cb6be9fe42af0a7115a668e76.scope: Deactivated successfully.
Nov 22 06:03:02 compute-0 sudo[282352]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:02 compute-0 sudo[282517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:03:02 compute-0 sudo[282517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:02 compute-0 sudo[282517]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:02 compute-0 sudo[282542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:03:02 compute-0 sudo[282542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:02 compute-0 sudo[282542]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:02 compute-0 sudo[282567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:03:02 compute-0 sudo[282567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:02 compute-0 sudo[282567]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:02 compute-0 sudo[282592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:03:02 compute-0 sudo[282592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.846047978 +0000 UTC m=+0.070954481 container create cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:03:02 compute-0 systemd[1]: Started libpod-conmon-cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e.scope.
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.815120249 +0000 UTC m=+0.040026842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.948845559 +0000 UTC m=+0.173752092 container init cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.955314302 +0000 UTC m=+0.180220785 container start cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 06:03:02 compute-0 hardcore_jemison[282675]: 167 167
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.958433616 +0000 UTC m=+0.183340149 container attach cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 06:03:02 compute-0 systemd[1]: libpod-cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e.scope: Deactivated successfully.
Nov 22 06:03:02 compute-0 podman[282659]: 2025-11-22 06:03:02.961357594 +0000 UTC m=+0.186264087 container died cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 06:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3affe88c198af26e8cb4166b977db56af161808d7747b68700b2aed0ee257ef-merged.mount: Deactivated successfully.
Nov 22 06:03:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:03 compute-0 podman[282659]: 2025-11-22 06:03:03.008915867 +0000 UTC m=+0.233822360 container remove cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 06:03:03 compute-0 systemd[1]: libpod-conmon-cd080e53fcf01c76a10f2127f99ff110895b1a7950da2bd7d7407cb8a2a32d3e.scope: Deactivated successfully.
Nov 22 06:03:03 compute-0 podman[282699]: 2025-11-22 06:03:03.243036042 +0000 UTC m=+0.048992692 container create 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 06:03:03 compute-0 systemd[1]: Started libpod-conmon-05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90.scope.
Nov 22 06:03:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:03 compute-0 podman[282699]: 2025-11-22 06:03:03.227834095 +0000 UTC m=+0.033790775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb062c6b3e3b533b084be2276ebc2dd8efaee511247c783ec78f51878be35b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb062c6b3e3b533b084be2276ebc2dd8efaee511247c783ec78f51878be35b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb062c6b3e3b533b084be2276ebc2dd8efaee511247c783ec78f51878be35b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eb062c6b3e3b533b084be2276ebc2dd8efaee511247c783ec78f51878be35b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:03 compute-0 podman[282699]: 2025-11-22 06:03:03.350884598 +0000 UTC m=+0.156841338 container init 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:03:03 compute-0 podman[282699]: 2025-11-22 06:03:03.360172447 +0000 UTC m=+0.166129127 container start 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:03:03 compute-0 podman[282699]: 2025-11-22 06:03:03.364553094 +0000 UTC m=+0.170509784 container attach 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 06:03:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:04 compute-0 ceph-mon[75840]: pgmap v1309: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]: {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     "0": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "devices": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "/dev/loop3"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             ],
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_name": "ceph_lv0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_size": "21470642176",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "name": "ceph_lv0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "tags": {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_name": "ceph",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.crush_device_class": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.encrypted": "0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_id": "0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.vdo": "0"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             },
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "vg_name": "ceph_vg0"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         }
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     ],
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     "1": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "devices": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "/dev/loop4"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             ],
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_name": "ceph_lv1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_size": "21470642176",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "name": "ceph_lv1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "tags": {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_name": "ceph",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.crush_device_class": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.encrypted": "0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_id": "1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.vdo": "0"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             },
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "vg_name": "ceph_vg1"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         }
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     ],
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     "2": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "devices": [
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "/dev/loop5"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             ],
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_name": "ceph_lv2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_size": "21470642176",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "name": "ceph_lv2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "tags": {
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.cluster_name": "ceph",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.crush_device_class": "",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.encrypted": "0",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osd_id": "2",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:                 "ceph.vdo": "0"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             },
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "type": "block",
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:             "vg_name": "ceph_vg2"
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:         }
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]:     ]
Nov 22 06:03:04 compute-0 pedantic_lalande[282716]: }
Nov 22 06:03:04 compute-0 systemd[1]: libpod-05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90.scope: Deactivated successfully.
Nov 22 06:03:04 compute-0 podman[282699]: 2025-11-22 06:03:04.121529364 +0000 UTC m=+0.927486054 container died 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 06:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eb062c6b3e3b533b084be2276ebc2dd8efaee511247c783ec78f51878be35b2-merged.mount: Deactivated successfully.
Nov 22 06:03:04 compute-0 podman[282699]: 2025-11-22 06:03:04.191969379 +0000 UTC m=+0.997926019 container remove 05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 06:03:04 compute-0 systemd[1]: libpod-conmon-05923fd5bcd41f4f571dcab3ead3a00c7ffd6d145e22684ebcac706722a46e90.scope: Deactivated successfully.
Nov 22 06:03:04 compute-0 sudo[282592]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:04 compute-0 sudo[282739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:03:04 compute-0 sudo[282739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:04 compute-0 sudo[282739]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:04 compute-0 sudo[282764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:03:04 compute-0 sudo[282764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:04 compute-0 sudo[282764]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:04 compute-0 sudo[282789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:03:04 compute-0 sudo[282789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:04 compute-0 sudo[282789]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:04 compute-0 sudo[282814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:03:04 compute-0 sudo[282814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.809733342 +0000 UTC m=+0.055689212 container create fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:03:04 compute-0 systemd[1]: Started libpod-conmon-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope.
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.782680838 +0000 UTC m=+0.028636748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.910066268 +0000 UTC m=+0.156022178 container init fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.917152987 +0000 UTC m=+0.163108817 container start fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.920847725 +0000 UTC m=+0.166803595 container attach fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:03:04 compute-0 fervent_ride[282895]: 167 167
Nov 22 06:03:04 compute-0 systemd[1]: libpod-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope: Deactivated successfully.
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.92513747 +0000 UTC m=+0.171093340 container died fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 06:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c85bc458c14c0a3b3ed1ec4521d5c76920a64a0da8759266cff503d1e802d15a-merged.mount: Deactivated successfully.
Nov 22 06:03:04 compute-0 podman[282879]: 2025-11-22 06:03:04.970678389 +0000 UTC m=+0.216634209 container remove fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:03:04 compute-0 systemd[1]: libpod-conmon-fa9ad56c13e4e455cd9b84c8a26a2b58084837576794feb6fbf807e2af958097.scope: Deactivated successfully.
Nov 22 06:03:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:05 compute-0 rsyslogd[1005]: imjournal: 18163 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 06:03:05 compute-0 podman[282919]: 2025-11-22 06:03:05.151044097 +0000 UTC m=+0.057294725 container create b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 06:03:05 compute-0 systemd[1]: Started libpod-conmon-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope.
Nov 22 06:03:05 compute-0 podman[282919]: 2025-11-22 06:03:05.122877893 +0000 UTC m=+0.029128571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:03:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:03:05 compute-0 podman[282919]: 2025-11-22 06:03:05.268064168 +0000 UTC m=+0.174314846 container init b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:03:05 compute-0 podman[282919]: 2025-11-22 06:03:05.279781442 +0000 UTC m=+0.186032040 container start b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:03:05 compute-0 podman[282919]: 2025-11-22 06:03:05.283021258 +0000 UTC m=+0.189271946 container attach b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:03:06 compute-0 ceph-mon[75840]: pgmap v1310: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:06 compute-0 angry_ride[282936]: {
Nov 22 06:03:06 compute-0 angry_ride[282936]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_id": 1,
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "type": "bluestore"
Nov 22 06:03:06 compute-0 angry_ride[282936]:     },
Nov 22 06:03:06 compute-0 angry_ride[282936]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_id": 2,
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "type": "bluestore"
Nov 22 06:03:06 compute-0 angry_ride[282936]:     },
Nov 22 06:03:06 compute-0 angry_ride[282936]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_id": 0,
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:03:06 compute-0 angry_ride[282936]:         "type": "bluestore"
Nov 22 06:03:06 compute-0 angry_ride[282936]:     }
Nov 22 06:03:06 compute-0 angry_ride[282936]: }
Nov 22 06:03:06 compute-0 systemd[1]: libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Deactivated successfully.
Nov 22 06:03:06 compute-0 systemd[1]: libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Consumed 1.097s CPU time.
Nov 22 06:03:06 compute-0 conmon[282936]: conmon b3b9bc73b51b08e15a46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope/container/memory.events
Nov 22 06:03:06 compute-0 podman[282919]: 2025-11-22 06:03:06.372798104 +0000 UTC m=+1.279048702 container died b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 06:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-3842aa0a4bca1bdaf165b21371696bf8d45b922e4f04a4f028444ad3f82a8cf5-merged.mount: Deactivated successfully.
Nov 22 06:03:06 compute-0 podman[282919]: 2025-11-22 06:03:06.434588708 +0000 UTC m=+1.340839306 container remove b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ride, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:03:06 compute-0 systemd[1]: libpod-conmon-b3b9bc73b51b08e15a46197ab154d7447dcfab807f91ed3b6ce671729de49902.scope: Deactivated successfully.
Nov 22 06:03:06 compute-0 sudo[282814]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:03:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:03:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:03:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:03:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5ce001fc-73ec-43d7-bdd9-b0d92d548003 does not exist
Nov 22 06:03:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f619ebac-5ce0-4134-9420-217e9a53f055 does not exist
Nov 22 06:03:06 compute-0 podman[282970]: 2025-11-22 06:03:06.5258191 +0000 UTC m=+0.114083815 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 06:03:06 compute-0 sudo[283000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:03:06 compute-0 sudo[283000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:06 compute-0 sudo[283000]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:06 compute-0 sudo[283030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:03:06 compute-0 sudo[283030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:03:06 compute-0 sudo[283030]: pam_unix(sudo:session): session closed for user root
Nov 22 06:03:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:03:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:03:07 compute-0 ceph-mon[75840]: pgmap v1311: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:10 compute-0 ceph-mon[75840]: pgmap v1312: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:12 compute-0 ceph-mon[75840]: pgmap v1313: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:14 compute-0 ceph-mon[75840]: pgmap v1314: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:15 compute-0 ceph-mon[75840]: pgmap v1315: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:16 compute-0 podman[283055]: 2025-11-22 06:03:16.229633705 +0000 UTC m=+0.078049629 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 22 06:03:16 compute-0 podman[283056]: 2025-11-22 06:03:16.249323822 +0000 UTC m=+0.103287115 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Nov 22 06:03:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:18 compute-0 ceph-mon[75840]: pgmap v1316: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.167 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.168 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.169 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:03:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:03:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3161789297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.680 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.889 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.891 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4954MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.892 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.957 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.957 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:03:18 compute-0 nova_compute[255660]: 2025-11-22 06:03:18.972 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:03:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3161789297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:03:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:03:19 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645355972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:03:19 compute-0 nova_compute[255660]: 2025-11-22 06:03:19.462 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:03:19 compute-0 nova_compute[255660]: 2025-11-22 06:03:19.468 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:03:19 compute-0 nova_compute[255660]: 2025-11-22 06:03:19.484 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:03:19 compute-0 nova_compute[255660]: 2025-11-22 06:03:19.487 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:03:19 compute-0 nova_compute[255660]: 2025-11-22 06:03:19.487 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:03:20 compute-0 ceph-mon[75840]: pgmap v1317: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:20 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1645355972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:03:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:22 compute-0 ceph-mon[75840]: pgmap v1318: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:23 compute-0 ceph-mon[75840]: pgmap v1319: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:25 compute-0 nova_compute[255660]: 2025-11-22 06:03:25.483 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:26 compute-0 ceph-mon[75840]: pgmap v1320: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:26 compute-0 nova_compute[255660]: 2025-11-22 06:03:26.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:26 compute-0 nova_compute[255660]: 2025-11-22 06:03:26.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:03:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:27 compute-0 nova_compute[255660]: 2025-11-22 06:03:27.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:28 compute-0 ceph-mon[75840]: pgmap v1321: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:28 compute-0 nova_compute[255660]: 2025-11-22 06:03:28.124 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:29 compute-0 nova_compute[255660]: 2025-11-22 06:03:29.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:30 compute-0 ceph-mon[75840]: pgmap v1322: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:31 compute-0 nova_compute[255660]: 2025-11-22 06:03:31.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:31 compute-0 nova_compute[255660]: 2025-11-22 06:03:31.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:32 compute-0 ceph-mon[75840]: pgmap v1323: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:32 compute-0 nova_compute[255660]: 2025-11-22 06:03:32.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:34 compute-0 ceph-mon[75840]: pgmap v1324: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:35 compute-0 nova_compute[255660]: 2025-11-22 06:03:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:03:35 compute-0 nova_compute[255660]: 2025-11-22 06:03:35.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:03:35 compute-0 nova_compute[255660]: 2025-11-22 06:03:35.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:03:35 compute-0 nova_compute[255660]: 2025-11-22 06:03:35.145 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:03:36 compute-0 ceph-mon[75840]: pgmap v1325: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.944 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:03:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:03:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:03:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:03:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:37 compute-0 ceph-mon[75840]: pgmap v1326: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:37 compute-0 podman[283138]: 2025-11-22 06:03:37.242499072 +0000 UTC m=+0.101852947 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 22 06:03:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:40 compute-0 ceph-mon[75840]: pgmap v1327: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:42 compute-0 ceph-mon[75840]: pgmap v1328: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:03:43
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'vms', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:03:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:03:44 compute-0 ceph-mon[75840]: pgmap v1329: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:03:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:03:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:46 compute-0 ceph-mon[75840]: pgmap v1330: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:47 compute-0 podman[283166]: 2025-11-22 06:03:47.213730474 +0000 UTC m=+0.063584403 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 06:03:47 compute-0 podman[283165]: 2025-11-22 06:03:47.238560839 +0000 UTC m=+0.089891727 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 06:03:48 compute-0 ceph-mon[75840]: pgmap v1331: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:49 compute-0 ceph-mon[75840]: pgmap v1332: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:52 compute-0 ceph-mon[75840]: pgmap v1333: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:03:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:03:53 compute-0 ceph-mon[75840]: pgmap v1334: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:55 compute-0 ceph-mon[75840]: pgmap v1335: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:03:57 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6557 writes, 30K keys, 6557 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6557 writes, 6557 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1710 writes, 8330 keys, 1710 commit groups, 1.0 writes per commit group, ingest: 10.39 MB, 0.02 MB/s
                                           Interval WAL: 1710 writes, 1710 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.0      0.31              0.13        16    0.019       0      0       0.0       0.0
                                             L6      1/0    8.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    135.7    111.3      1.00              0.40        15    0.067     72K   8389       0.0       0.0
                                            Sum      1/0    8.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    103.7    109.8      1.31              0.54        31    0.042     72K   8389       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0    103.6    106.0      0.41              0.14         8    0.051     24K   2605       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.7    111.3      1.00              0.40        15    0.067     72K   8389       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    108.2      0.30              0.13        15    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.5      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.3 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdfafc91f0#2 capacity: 304.00 MB usage: 15.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00024 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1246,15.18 MB,4.9939%) FilterBlock(32,211.23 KB,0.0678564%) IndexBlock(32,389.08 KB,0.124987%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 06:03:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:58 compute-0 ceph-mon[75840]: pgmap v1336: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:03:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:03:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:00 compute-0 ceph-mon[75840]: pgmap v1337: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:01 compute-0 ceph-mon[75840]: pgmap v1338: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:04 compute-0 ceph-mon[75840]: pgmap v1339: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:06 compute-0 ceph-mon[75840]: pgmap v1340: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:06 compute-0 sudo[283199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:06 compute-0 sudo[283199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:06 compute-0 sudo[283199]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:06 compute-0 sudo[283224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:04:06 compute-0 sudo[283224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:06 compute-0 sudo[283224]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:06 compute-0 sudo[283249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:06 compute-0 sudo[283249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:06 compute-0 sudo[283249]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:06 compute-0 sudo[283274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:04:06 compute-0 sudo[283274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:07 compute-0 sudo[283274]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5c923816-e00b-415c-bb75-0c16c40f27f7 does not exist
Nov 22 06:04:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 45b6737c-0337-4b8f-860a-991a7b628c29 does not exist
Nov 22 06:04:07 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 62679f6e-2889-42d7-b125-ddd058eb63f3 does not exist
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:04:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:04:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:04:07 compute-0 sudo[283331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:07 compute-0 sudo[283331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:07 compute-0 sudo[283331]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:07 compute-0 sudo[283362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:04:07 compute-0 sudo[283362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:07 compute-0 sudo[283362]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:07 compute-0 podman[283355]: 2025-11-22 06:04:07.900149783 +0000 UTC m=+0.107737675 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 06:04:07 compute-0 sudo[283399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:07 compute-0 sudo[283399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:07 compute-0 sudo[283399]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:07 compute-0 sudo[283429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:04:08 compute-0 sudo[283429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:08 compute-0 ceph-mon[75840]: pgmap v1341: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:04:08 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.446122915 +0000 UTC m=+0.056234666 container create e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 06:04:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:08 compute-0 systemd[1]: Started libpod-conmon-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope.
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.419057561 +0000 UTC m=+0.029169312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.562989882 +0000 UTC m=+0.173101623 container init e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.575754824 +0000 UTC m=+0.185866535 container start e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.580272406 +0000 UTC m=+0.190384157 container attach e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 06:04:08 compute-0 lucid_cori[283512]: 167 167
Nov 22 06:04:08 compute-0 systemd[1]: libpod-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope: Deactivated successfully.
Nov 22 06:04:08 compute-0 conmon[283512]: conmon e9116e9a8eec64c96d11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope/container/memory.events
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.586138122 +0000 UTC m=+0.196249833 container died e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 06:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0477f754f42a9163ceebe41f236b5b55d54a5eea7ed928e1e7be5075fac45179-merged.mount: Deactivated successfully.
Nov 22 06:04:08 compute-0 podman[283496]: 2025-11-22 06:04:08.657716398 +0000 UTC m=+0.267828119 container remove e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_cori, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 06:04:08 compute-0 systemd[1]: libpod-conmon-e9116e9a8eec64c96d1184d76591054b596f84926f0fa11cec96bd75cd691a97.scope: Deactivated successfully.
Nov 22 06:04:08 compute-0 podman[283538]: 2025-11-22 06:04:08.892846901 +0000 UTC m=+0.056320628 container create a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:04:08 compute-0 systemd[1]: Started libpod-conmon-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope.
Nov 22 06:04:08 compute-0 podman[283538]: 2025-11-22 06:04:08.868214031 +0000 UTC m=+0.031687868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:08 compute-0 podman[283538]: 2025-11-22 06:04:08.994116491 +0000 UTC m=+0.157590228 container init a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:04:09 compute-0 podman[283538]: 2025-11-22 06:04:09.005103505 +0000 UTC m=+0.168577222 container start a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 06:04:09 compute-0 podman[283538]: 2025-11-22 06:04:09.008440245 +0000 UTC m=+0.171913962 container attach a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:04:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:09 compute-0 ceph-mon[75840]: pgmap v1342: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:10 compute-0 modest_lumiere[283555]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:04:10 compute-0 modest_lumiere[283555]: --> relative data size: 1.0
Nov 22 06:04:10 compute-0 modest_lumiere[283555]: --> All data devices are unavailable
Nov 22 06:04:10 compute-0 systemd[1]: libpod-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Deactivated successfully.
Nov 22 06:04:10 compute-0 podman[283538]: 2025-11-22 06:04:10.13706126 +0000 UTC m=+1.300535007 container died a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:04:10 compute-0 systemd[1]: libpod-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Consumed 1.085s CPU time.
Nov 22 06:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-379ed915c1a7b1fc7e6cf1a82fe4846785fc23a02caf1e4c9d0a9a9964589930-merged.mount: Deactivated successfully.
Nov 22 06:04:10 compute-0 podman[283538]: 2025-11-22 06:04:10.2084232 +0000 UTC m=+1.371896947 container remove a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:04:10 compute-0 systemd[1]: libpod-conmon-a246d16597f5b634c625ae3ede07a38df7e5532dc8e791dd64eca0b3da32a790.scope: Deactivated successfully.
Nov 22 06:04:10 compute-0 sudo[283429]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:10 compute-0 sudo[283599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:10 compute-0 sudo[283599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:10 compute-0 sudo[283599]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:10 compute-0 sudo[283624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:04:10 compute-0 sudo[283624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:10 compute-0 sudo[283624]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:10 compute-0 sudo[283649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:10 compute-0 sudo[283649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:10 compute-0 sudo[283649]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:10 compute-0 sudo[283674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:04:10 compute-0 sudo[283674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.086445079 +0000 UTC m=+0.064125447 container create 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:04:11 compute-0 systemd[1]: Started libpod-conmon-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope.
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.062621152 +0000 UTC m=+0.040301600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.184504473 +0000 UTC m=+0.162184861 container init 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.195796126 +0000 UTC m=+0.173476494 container start 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.199903636 +0000 UTC m=+0.177584034 container attach 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:04:11 compute-0 sweet_hermann[283755]: 167 167
Nov 22 06:04:11 compute-0 systemd[1]: libpod-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope: Deactivated successfully.
Nov 22 06:04:11 compute-0 conmon[283755]: conmon 11a2df632166b094b6d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope/container/memory.events
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.203234934 +0000 UTC m=+0.180915292 container died 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 06:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2a6167f8789594a165953546ac4805187a64685a41c18b19cee3d7ef6fe302-merged.mount: Deactivated successfully.
Nov 22 06:04:11 compute-0 podman[283739]: 2025-11-22 06:04:11.254306952 +0000 UTC m=+0.231987340 container remove 11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:04:11 compute-0 systemd[1]: libpod-conmon-11a2df632166b094b6d8cfc07bfcf91885898473746ddbec82d2692ab41ddfce.scope: Deactivated successfully.
Nov 22 06:04:11 compute-0 podman[283779]: 2025-11-22 06:04:11.529210089 +0000 UTC m=+0.091695636 container create e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 06:04:11 compute-0 podman[283779]: 2025-11-22 06:04:11.471206116 +0000 UTC m=+0.033691723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:11 compute-0 systemd[1]: Started libpod-conmon-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope.
Nov 22 06:04:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:11 compute-0 podman[283779]: 2025-11-22 06:04:11.630819638 +0000 UTC m=+0.193305225 container init e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:04:11 compute-0 podman[283779]: 2025-11-22 06:04:11.648810619 +0000 UTC m=+0.211296126 container start e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:04:11 compute-0 podman[283779]: 2025-11-22 06:04:11.652504178 +0000 UTC m=+0.214989685 container attach e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 06:04:12 compute-0 ceph-mon[75840]: pgmap v1343: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]: {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     "0": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "devices": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "/dev/loop3"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             ],
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_name": "ceph_lv0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_size": "21470642176",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "name": "ceph_lv0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "tags": {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_name": "ceph",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.crush_device_class": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.encrypted": "0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_id": "0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.vdo": "0"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             },
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "vg_name": "ceph_vg0"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         }
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     ],
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     "1": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "devices": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "/dev/loop4"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             ],
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_name": "ceph_lv1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_size": "21470642176",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "name": "ceph_lv1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "tags": {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_name": "ceph",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.crush_device_class": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.encrypted": "0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_id": "1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.vdo": "0"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             },
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "vg_name": "ceph_vg1"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         }
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     ],
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     "2": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "devices": [
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "/dev/loop5"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             ],
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_name": "ceph_lv2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_size": "21470642176",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "name": "ceph_lv2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "tags": {
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.cluster_name": "ceph",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.crush_device_class": "",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.encrypted": "0",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osd_id": "2",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:                 "ceph.vdo": "0"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             },
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "type": "block",
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:             "vg_name": "ceph_vg2"
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:         }
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]:     ]
Nov 22 06:04:12 compute-0 blissful_pasteur[283796]: }
Nov 22 06:04:12 compute-0 systemd[1]: libpod-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope: Deactivated successfully.
Nov 22 06:04:12 compute-0 podman[283779]: 2025-11-22 06:04:12.443990641 +0000 UTC m=+1.006476148 container died e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5fcb82d9160171b9799735cc6a19266c2dc904041a71a0f1f1f01260127b954-merged.mount: Deactivated successfully.
Nov 22 06:04:12 compute-0 podman[283779]: 2025-11-22 06:04:12.503388121 +0000 UTC m=+1.065873638 container remove e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:04:12 compute-0 systemd[1]: libpod-conmon-e93ffbf69f286fd196425a1b0d18335179a14b9214143711f763bc86a9982180.scope: Deactivated successfully.
Nov 22 06:04:12 compute-0 sudo[283674]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:12 compute-0 sudo[283818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:12 compute-0 sudo[283818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:12 compute-0 sudo[283818]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:12 compute-0 sudo[283843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:04:12 compute-0 sudo[283843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:12 compute-0 sudo[283843]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:12 compute-0 sudo[283868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:12 compute-0 sudo[283868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:12 compute-0 sudo[283868]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:12 compute-0 sudo[283893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:04:12 compute-0 sudo[283893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.291267747 +0000 UTC m=+0.066191752 container create d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 06:04:13 compute-0 systemd[1]: Started libpod-conmon-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope.
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.264527411 +0000 UTC m=+0.039451506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.378820301 +0000 UTC m=+0.153744376 container init d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.39078267 +0000 UTC m=+0.165706715 container start d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.395073546 +0000 UTC m=+0.169997601 container attach d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:04:13 compute-0 systemd[1]: libpod-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope: Deactivated successfully.
Nov 22 06:04:13 compute-0 agitated_mahavira[283974]: 167 167
Nov 22 06:04:13 compute-0 conmon[283974]: conmon d0ff49ac4730608a5738 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope/container/memory.events
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.397618483 +0000 UTC m=+0.172542548 container died d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 06:04:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-40fa16042e57314477069bb1c1f1d002074b2bd30d0d043d09f2152043281f13-merged.mount: Deactivated successfully.
Nov 22 06:04:13 compute-0 podman[283957]: 2025-11-22 06:04:13.447504219 +0000 UTC m=+0.222428254 container remove d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 06:04:13 compute-0 systemd[1]: libpod-conmon-d0ff49ac4730608a5738e97e44a292bdcb72508e29fbded38bf45f9abc69989a.scope: Deactivated successfully.
Nov 22 06:04:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:13 compute-0 podman[283996]: 2025-11-22 06:04:13.653145533 +0000 UTC m=+0.065481434 container create 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:04:13 compute-0 systemd[1]: Started libpod-conmon-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope.
Nov 22 06:04:13 compute-0 podman[283996]: 2025-11-22 06:04:13.625688027 +0000 UTC m=+0.038023968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:04:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:04:13 compute-0 podman[283996]: 2025-11-22 06:04:13.772395624 +0000 UTC m=+0.184731575 container init 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 06:04:13 compute-0 podman[283996]: 2025-11-22 06:04:13.783548922 +0000 UTC m=+0.195884813 container start 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:04:13 compute-0 podman[283996]: 2025-11-22 06:04:13.792971015 +0000 UTC m=+0.205306906 container attach 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:14 compute-0 ceph-mon[75840]: pgmap v1344: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:14 compute-0 eager_ellis[284012]: {
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_id": 1,
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "type": "bluestore"
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     },
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_id": 2,
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "type": "bluestore"
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     },
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_id": 0,
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:04:14 compute-0 eager_ellis[284012]:         "type": "bluestore"
Nov 22 06:04:14 compute-0 eager_ellis[284012]:     }
Nov 22 06:04:14 compute-0 eager_ellis[284012]: }
Nov 22 06:04:14 compute-0 systemd[1]: libpod-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Deactivated successfully.
Nov 22 06:04:14 compute-0 podman[283996]: 2025-11-22 06:04:14.892007229 +0000 UTC m=+1.304343140 container died 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 06:04:14 compute-0 systemd[1]: libpod-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Consumed 1.117s CPU time.
Nov 22 06:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f53da18b08346c7399210212a8ba32b43dc9ec533f5826ccfc641ee10de4283-merged.mount: Deactivated successfully.
Nov 22 06:04:14 compute-0 podman[283996]: 2025-11-22 06:04:14.947568805 +0000 UTC m=+1.359904656 container remove 6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_ellis, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:04:14 compute-0 systemd[1]: libpod-conmon-6b865f5be211d0ba64c191928162639d30aee8aab2e5175660d684e3c255b929.scope: Deactivated successfully.
Nov 22 06:04:14 compute-0 sudo[283893]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:04:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:04:15 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:15 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 23417d8c-0ca7-4982-8541-dfbe674a1eb4 does not exist
Nov 22 06:04:15 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9794301b-9e76-467f-9578-7f3b5624a2ce does not exist
Nov 22 06:04:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:15 compute-0 sudo[284057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:04:15 compute-0 sudo[284057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:15 compute-0 sudo[284057]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:15 compute-0 sudo[284082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:04:15 compute-0 sudo[284082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:04:15 compute-0 sudo[284082]: pam_unix(sudo:session): session closed for user root
Nov 22 06:04:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:04:16 compute-0 ceph-mon[75840]: pgmap v1345: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:18 compute-0 ceph-mon[75840]: pgmap v1346: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:18 compute-0 podman[284108]: 2025-11-22 06:04:18.225363723 +0000 UTC m=+0.076477837 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 22 06:04:18 compute-0 podman[284107]: 2025-11-22 06:04:18.235835864 +0000 UTC m=+0.083769393 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:04:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:20 compute-0 ceph-mon[75840]: pgmap v1347: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.200 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.200 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.201 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:04:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:04:20 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2599743546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.634 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.828 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.830 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4952MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.830 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.831 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.900 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.901 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:04:20 compute-0 nova_compute[255660]: 2025-11-22 06:04:20.917 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:04:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:21 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2599743546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:04:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:04:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895548595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:04:21 compute-0 nova_compute[255660]: 2025-11-22 06:04:21.408 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:04:21 compute-0 nova_compute[255660]: 2025-11-22 06:04:21.415 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:04:21 compute-0 nova_compute[255660]: 2025-11-22 06:04:21.427 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:04:21 compute-0 nova_compute[255660]: 2025-11-22 06:04:21.429 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:04:21 compute-0 nova_compute[255660]: 2025-11-22 06:04:21.430 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:04:22 compute-0 ceph-mon[75840]: pgmap v1348: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3895548595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:04:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:23 compute-0 ceph-mon[75840]: pgmap v1349: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:26 compute-0 ceph-mon[75840]: pgmap v1350: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:27 compute-0 ceph-mon[75840]: pgmap v1351: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:29 compute-0 nova_compute[255660]: 2025-11-22 06:04:29.427 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:29 compute-0 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:29 compute-0 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:29 compute-0 nova_compute[255660]: 2025-11-22 06:04:29.428 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:04:30 compute-0 ceph-mon[75840]: pgmap v1352: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:30 compute-0 nova_compute[255660]: 2025-11-22 06:04:30.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:32 compute-0 ceph-mon[75840]: pgmap v1353: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:32 compute-0 nova_compute[255660]: 2025-11-22 06:04:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:32 compute-0 nova_compute[255660]: 2025-11-22 06:04:32.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:33 compute-0 nova_compute[255660]: 2025-11-22 06:04:33.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:33 compute-0 ceph-mon[75840]: pgmap v1354: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:36 compute-0 ceph-mon[75840]: pgmap v1355: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.945 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:04:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.946 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:04:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:04:36.946 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:04:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:37 compute-0 nova_compute[255660]: 2025-11-22 06:04:37.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:04:37 compute-0 nova_compute[255660]: 2025-11-22 06:04:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:04:37 compute-0 nova_compute[255660]: 2025-11-22 06:04:37.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:04:37 compute-0 nova_compute[255660]: 2025-11-22 06:04:37.160 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:04:38 compute-0 ceph-mon[75840]: pgmap v1356: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:38 compute-0 podman[284188]: 2025-11-22 06:04:38.259515742 +0000 UTC m=+0.120160707 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 06:04:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:40 compute-0 ceph-mon[75840]: pgmap v1357: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:42 compute-0 ceph-mon[75840]: pgmap v1358: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:43 compute-0 ceph-mon[75840]: pgmap v1359: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:04:43
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:04:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:04:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:04:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:46 compute-0 ceph-mon[75840]: pgmap v1360: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:04:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:04:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:04:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:04:48 compute-0 ceph-mon[75840]: pgmap v1361: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:04:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2118759774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:04:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:49 compute-0 podman[284215]: 2025-11-22 06:04:49.236098753 +0000 UTC m=+0.081333098 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 22 06:04:49 compute-0 podman[284214]: 2025-11-22 06:04:49.236177635 +0000 UTC m=+0.094498991 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:04:50 compute-0 ceph-mon[75840]: pgmap v1362: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:51 compute-0 ceph-mon[75840]: pgmap v1363: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.126785) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493126829, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2408, "num_deletes": 510, "total_data_size": 3487732, "memory_usage": 3560592, "flush_reason": "Manual Compaction"}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493153144, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3431555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28627, "largest_seqno": 31034, "table_properties": {"data_size": 3420894, "index_size": 6323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25700, "raw_average_key_size": 19, "raw_value_size": 3397164, "raw_average_value_size": 2623, "num_data_blocks": 279, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791275, "oldest_key_time": 1763791275, "file_creation_time": 1763791493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 26440 microseconds, and 13944 cpu microseconds.
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.153220) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3431555 bytes OK
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.153248) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155188) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155208) EVENT_LOG_v1 {"time_micros": 1763791493155201, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.155231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3476390, prev total WAL file size 3476390, number of live WAL files 2.
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.156861) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3351KB)], [62(8237KB)]
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493156903, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11866657, "oldest_snapshot_seqno": -1}
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:04:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6097 keys, 10192949 bytes, temperature: kUnknown
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493237443, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10192949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10149656, "index_size": 26927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 153763, "raw_average_key_size": 25, "raw_value_size": 10037910, "raw_average_value_size": 1646, "num_data_blocks": 1101, "num_entries": 6097, "num_filter_entries": 6097, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791493, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.237855) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10192949 bytes
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.240684) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 7133, records dropped: 1036 output_compression: NoCompression
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.240714) EVENT_LOG_v1 {"time_micros": 1763791493240700, "job": 34, "event": "compaction_finished", "compaction_time_micros": 80667, "compaction_time_cpu_micros": 20471, "output_level": 6, "num_output_files": 1, "total_output_size": 10192949, "num_input_records": 7133, "num_output_records": 6097, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493241870, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791493244643, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.156768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:04:53.244713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:04:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:54 compute-0 ceph-mon[75840]: pgmap v1364: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:55 compute-0 sshd-session[284253]: Invalid user validator from 80.94.92.166 port 56934
Nov 22 06:04:55 compute-0 sshd-session[284253]: Connection closed by invalid user validator 80.94.92.166 port 56934 [preauth]
Nov 22 06:04:56 compute-0 ceph-mon[75840]: pgmap v1365: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:57 compute-0 ceph-mon[75840]: pgmap v1366: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:04:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:04:59 compute-0 ceph-mon[75840]: pgmap v1367: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:02 compute-0 ceph-mon[75840]: pgmap v1368: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:04 compute-0 ceph-mon[75840]: pgmap v1369: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:06 compute-0 ceph-mon[75840]: pgmap v1370: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:07 compute-0 ceph-mon[75840]: pgmap v1371: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:09 compute-0 podman[284255]: 2025-11-22 06:05:09.254603914 +0000 UTC m=+0.112353238 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:10 compute-0 ceph-mon[75840]: pgmap v1372: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:12 compute-0 ceph-mon[75840]: pgmap v1373: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:14 compute-0 ceph-mon[75840]: pgmap v1374: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:15 compute-0 ceph-mon[75840]: pgmap v1375: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:15 compute-0 sudo[284281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:15 compute-0 sudo[284281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:15 compute-0 sudo[284281]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:15 compute-0 sudo[284306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:05:15 compute-0 sudo[284306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:15 compute-0 sudo[284306]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:15 compute-0 sudo[284331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:15 compute-0 sudo[284331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:15 compute-0 sudo[284331]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:15 compute-0 sudo[284356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:05:15 compute-0 sudo[284356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:16 compute-0 nova_compute[255660]: 2025-11-22 06:05:16.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:16 compute-0 nova_compute[255660]: 2025-11-22 06:05:16.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 06:05:16 compute-0 sudo[284356]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:16 compute-0 nova_compute[255660]: 2025-11-22 06:05:16.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:16 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 477595a4-66f0-4d44-9a75-733de6f74c5a does not exist
Nov 22 06:05:16 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev b7df0aa4-7142-45a6-9890-5007a28f93f9 does not exist
Nov 22 06:05:16 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev b91d56ad-89ec-42f3-9fc5-6642b37343a7 does not exist
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:05:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:05:16 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:05:16 compute-0 sudo[284412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:16 compute-0 sudo[284412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:16 compute-0 sudo[284412]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:16 compute-0 sudo[284437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:05:16 compute-0 sudo[284437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:16 compute-0 sudo[284437]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:16 compute-0 sudo[284462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:16 compute-0 sudo[284462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:16 compute-0 sudo[284462]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:16 compute-0 sudo[284487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:05:16 compute-0 sudo[284487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:16 compute-0 podman[284553]: 2025-11-22 06:05:16.945769906 +0000 UTC m=+0.073768794 container create 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 06:05:16 compute-0 systemd[1]: Started libpod-conmon-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope.
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:16.915323291 +0000 UTC m=+0.043322219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:17.045538436 +0000 UTC m=+0.173537304 container init 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:17.058082252 +0000 UTC m=+0.186081100 container start 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:17.063159248 +0000 UTC m=+0.191158186 container attach 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 06:05:17 compute-0 mystifying_joliot[284569]: 167 167
Nov 22 06:05:17 compute-0 systemd[1]: libpod-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope: Deactivated successfully.
Nov 22 06:05:17 compute-0 conmon[284569]: conmon 46782d17abf944dd53bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope/container/memory.events
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:17.066509597 +0000 UTC m=+0.194508485 container died 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 06:05:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-60d64d5658c58d7d6830acdcf093e30ed2268d2b92df87409b3cbe5df661bd60-merged.mount: Deactivated successfully.
Nov 22 06:05:17 compute-0 podman[284553]: 2025-11-22 06:05:17.120656157 +0000 UTC m=+0.248655015 container remove 46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_joliot, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 06:05:17 compute-0 systemd[1]: libpod-conmon-46782d17abf944dd53bf855a4bd4149973cb12d02f92b72aa1ae55bc4666cf9e.scope: Deactivated successfully.
Nov 22 06:05:17 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:05:17 compute-0 ceph-mon[75840]: pgmap v1376: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:17 compute-0 podman[284593]: 2025-11-22 06:05:17.323610049 +0000 UTC m=+0.048087098 container create 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:17 compute-0 systemd[1]: Started libpod-conmon-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope.
Nov 22 06:05:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:17 compute-0 podman[284593]: 2025-11-22 06:05:17.303596163 +0000 UTC m=+0.028073232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:17 compute-0 podman[284593]: 2025-11-22 06:05:17.419052213 +0000 UTC m=+0.143529342 container init 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:17 compute-0 podman[284593]: 2025-11-22 06:05:17.425715651 +0000 UTC m=+0.150192690 container start 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 06:05:17 compute-0 podman[284593]: 2025-11-22 06:05:17.429528244 +0000 UTC m=+0.154005313 container attach 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 06:05:18 compute-0 exciting_cray[284609]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:05:18 compute-0 exciting_cray[284609]: --> relative data size: 1.0
Nov 22 06:05:18 compute-0 exciting_cray[284609]: --> All data devices are unavailable
Nov 22 06:05:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:18 compute-0 systemd[1]: libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Deactivated successfully.
Nov 22 06:05:18 compute-0 systemd[1]: libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Consumed 1.019s CPU time.
Nov 22 06:05:18 compute-0 conmon[284609]: conmon 0668bc39968635d30291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope/container/memory.events
Nov 22 06:05:18 compute-0 podman[284593]: 2025-11-22 06:05:18.485550906 +0000 UTC m=+1.210027975 container died 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-899891c7ef432f30f84bf4811e1f29ccccbe5ee73c6e188d325cb7a36744ee4c-merged.mount: Deactivated successfully.
Nov 22 06:05:18 compute-0 podman[284593]: 2025-11-22 06:05:18.536545501 +0000 UTC m=+1.261022570 container remove 0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_cray, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:05:18 compute-0 systemd[1]: libpod-conmon-0668bc39968635d302915f769533b84d1d32c9cf7684022231fdfd7a35879f0f.scope: Deactivated successfully.
Nov 22 06:05:18 compute-0 sudo[284487]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:18 compute-0 sudo[284652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:18 compute-0 sudo[284652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:18 compute-0 sudo[284652]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:18 compute-0 sudo[284677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:05:18 compute-0 sudo[284677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:18 compute-0 sudo[284677]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:18 compute-0 sudo[284702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:18 compute-0 sudo[284702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:18 compute-0 sudo[284702]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:18 compute-0 sudo[284727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:05:18 compute-0 sudo[284727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.264225216 +0000 UTC m=+0.050749989 container create 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 06:05:19 compute-0 systemd[1]: Started libpod-conmon-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope.
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.240626904 +0000 UTC m=+0.027151747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.361232502 +0000 UTC m=+0.147757295 container init 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.371658952 +0000 UTC m=+0.158183735 container start 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 06:05:19 compute-0 elastic_mendel[284818]: 167 167
Nov 22 06:05:19 compute-0 systemd[1]: libpod-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope: Deactivated successfully.
Nov 22 06:05:19 compute-0 conmon[284818]: conmon 9752bcde6197a01a6e9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope/container/memory.events
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.378417643 +0000 UTC m=+0.164942426 container attach 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.379049519 +0000 UTC m=+0.165574302 container died 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 06:05:19 compute-0 podman[284812]: 2025-11-22 06:05:19.389698775 +0000 UTC m=+0.068002042 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:19 compute-0 podman[284810]: 2025-11-22 06:05:19.392957201 +0000 UTC m=+0.082732735 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 22 06:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a100f25fd2857b653c30674d7db25ee16024e8e290b950826861ed4207a682c-merged.mount: Deactivated successfully.
Nov 22 06:05:19 compute-0 podman[284794]: 2025-11-22 06:05:19.415662409 +0000 UTC m=+0.202187172 container remove 9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:05:19 compute-0 systemd[1]: libpod-conmon-9752bcde6197a01a6e9d404f1ef8224b679b5a50635790d4ac699c0392445fdb.scope: Deactivated successfully.
Nov 22 06:05:19 compute-0 podman[284874]: 2025-11-22 06:05:19.60365156 +0000 UTC m=+0.056538694 container create 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 06:05:19 compute-0 systemd[1]: Started libpod-conmon-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope.
Nov 22 06:05:19 compute-0 podman[284874]: 2025-11-22 06:05:19.575133447 +0000 UTC m=+0.028020641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:19 compute-0 podman[284874]: 2025-11-22 06:05:19.714325353 +0000 UTC m=+0.167212547 container init 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:05:19 compute-0 podman[284874]: 2025-11-22 06:05:19.725035359 +0000 UTC m=+0.177922493 container start 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 06:05:19 compute-0 podman[284874]: 2025-11-22 06:05:19.728916183 +0000 UTC m=+0.181803327 container attach 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 06:05:20 compute-0 ceph-mon[75840]: pgmap v1377: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]: {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     "0": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "devices": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "/dev/loop3"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             ],
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_name": "ceph_lv0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_size": "21470642176",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "name": "ceph_lv0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "tags": {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_name": "ceph",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.crush_device_class": "",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.encrypted": "0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_id": "0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.vdo": "0"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             },
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "vg_name": "ceph_vg0"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         }
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     ],
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     "1": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "devices": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "/dev/loop4"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             ],
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_name": "ceph_lv1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_size": "21470642176",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "name": "ceph_lv1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "tags": {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_name": "ceph",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.crush_device_class": "",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.encrypted": "0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_id": "1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.vdo": "0"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             },
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "vg_name": "ceph_vg1"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         }
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     ],
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     "2": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "devices": [
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "/dev/loop5"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             ],
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_name": "ceph_lv2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_size": "21470642176",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "name": "ceph_lv2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "tags": {
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:05:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.cluster_name": "ceph",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.crush_device_class": "",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.encrypted": "0",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osd_id": "2",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:                 "ceph.vdo": "0"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             },
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "type": "block",
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:             "vg_name": "ceph_vg2"
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:         }
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]:     ]
Nov 22 06:05:20 compute-0 hopeful_blackburn[284892]: }
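The container above (`hopeful_blackburn`) is cephadm running `ceph-volume lvm list --format json`: its stdout maps OSD IDs ("0", "1", "2") to lists of LV records, each carrying the backing devices and `ceph.*` LV tags. A minimal sketch of summarizing such a payload into an OSD-to-device map; the `payload` literal below is an abbreviated, hypothetical stand-in for the full JSON logged above, and `osd_devices` is an illustrative helper, not part of ceph-volume:

```python
import json

# Abbreviated stand-in for the `ceph-volume lvm list --format json`
# output captured in the log above; only the fields used below are kept.
payload = """
{
    "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "tags": {"ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8"}}],
    "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
           "tags": {"ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695"}}]
}
"""

def osd_devices(raw: str) -> dict:
    """Map each OSD id to the sorted block devices backing its LVs."""
    listing = json.loads(raw)
    return {int(osd_id): sorted(d for lv in lvs for d in lv["devices"])
            for osd_id, lvs in listing.items()}

print(osd_devices(payload))  # {0: ['/dev/loop3'], 1: ['/dev/loop4']}
```

In the deployment logged here, each OSD sits on a single loop-backed LV, so each list has one entry; multi-device OSDs (separate DB/WAL LVs) would show additional records per OSD id.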
Nov 22 06:05:20 compute-0 systemd[1]: libpod-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope: Deactivated successfully.
Nov 22 06:05:20 compute-0 podman[284874]: 2025-11-22 06:05:20.515552336 +0000 UTC m=+0.968439480 container died 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 06:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e757246de7aea5113aae85e9af597b6f92bf05d85ac58ff2c2180dab203d1e11-merged.mount: Deactivated successfully.
Nov 22 06:05:20 compute-0 podman[284874]: 2025-11-22 06:05:20.602824101 +0000 UTC m=+1.055711245 container remove 062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_blackburn, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:05:20 compute-0 systemd[1]: libpod-conmon-062ccbd1227a4ea0ba8695b469c30072bc80fdebb5a8d4ff9b49e72953434422.scope: Deactivated successfully.
Nov 22 06:05:20 compute-0 sudo[284727]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:20 compute-0 sudo[284917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:20 compute-0 sudo[284917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:20 compute-0 sudo[284917]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:20 compute-0 sudo[284942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:05:20 compute-0 sudo[284942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:20 compute-0 sudo[284942]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:20 compute-0 sudo[284967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:20 compute-0 sudo[284967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:20 compute-0 sudo[284967]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:21 compute-0 sudo[284992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:05:21 compute-0 sudo[284992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:21 compute-0 ceph-mon[75840]: pgmap v1378: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.389201425 +0000 UTC m=+0.035778564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.539819528 +0000 UTC m=+0.186396667 container create 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:05:21 compute-0 systemd[1]: Started libpod-conmon-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope.
Nov 22 06:05:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.665543082 +0000 UTC m=+0.312120221 container init 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.676063816 +0000 UTC m=+0.322640965 container start 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.681029539 +0000 UTC m=+0.327606668 container attach 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 06:05:21 compute-0 gifted_bhabha[285073]: 167 167
Nov 22 06:05:21 compute-0 systemd[1]: libpod-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope: Deactivated successfully.
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.684699078 +0000 UTC m=+0.331276247 container died 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 06:05:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-23ccf179ed92f886c22a143c406d7b9bd7fd6c082825cf3b6d8f92a3c1447efa-merged.mount: Deactivated successfully.
Nov 22 06:05:21 compute-0 podman[285056]: 2025-11-22 06:05:21.728720532 +0000 UTC m=+0.375297651 container remove 437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bhabha, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:05:21 compute-0 systemd[1]: libpod-conmon-437de44b0d2f24f72baefeeac8f421a98fa9002fc2ebcc65b2d76fc457286d1d.scope: Deactivated successfully.
Nov 22 06:05:21 compute-0 podman[285097]: 2025-11-22 06:05:21.929599999 +0000 UTC m=+0.051609230 container create 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:05:21 compute-0 systemd[1]: Started libpod-conmon-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope.
Nov 22 06:05:21 compute-0 podman[285097]: 2025-11-22 06:05:21.9069567 +0000 UTC m=+0.028965941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:05:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:05:22 compute-0 podman[285097]: 2025-11-22 06:05:22.017550717 +0000 UTC m=+0.139559978 container init 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:05:22 compute-0 podman[285097]: 2025-11-22 06:05:22.031181223 +0000 UTC m=+0.153190444 container start 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:05:22 compute-0 podman[285097]: 2025-11-22 06:05:22.035074588 +0000 UTC m=+0.157083849 container attach 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.148 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.172 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.173 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.173 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.174 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.174 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:05:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:05:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90430756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.641 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:05:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/90430756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.867 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.869 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4881MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.870 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.870 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.945 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.946 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:05:22 compute-0 nova_compute[255660]: 2025-11-22 06:05:22.969 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:05:23 compute-0 affectionate_buck[285113]: {
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_id": 1,
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "type": "bluestore"
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     },
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_id": 2,
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "type": "bluestore"
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     },
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_id": 0,
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:         "type": "bluestore"
Nov 22 06:05:23 compute-0 affectionate_buck[285113]:     }
Nov 22 06:05:23 compute-0 affectionate_buck[285113]: }
Nov 22 06:05:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:23 compute-0 systemd[1]: libpod-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Deactivated successfully.
Nov 22 06:05:23 compute-0 systemd[1]: libpod-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Consumed 1.015s CPU time.
Nov 22 06:05:23 compute-0 podman[285097]: 2025-11-22 06:05:23.082281642 +0000 UTC m=+1.204290923 container died 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 06:05:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-44015d6ca1bef3990cca5771d5cb34afd987487437df5bfb748331999afe58a9-merged.mount: Deactivated successfully.
Nov 22 06:05:23 compute-0 podman[285097]: 2025-11-22 06:05:23.162025619 +0000 UTC m=+1.284034870 container remove 5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 06:05:23 compute-0 systemd[1]: libpod-conmon-5268ab376b6410fec6e6bb5ed12c89163e3b606fe01816de02e6aca8efc0c9bf.scope: Deactivated successfully.
Nov 22 06:05:23 compute-0 sudo[284992]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:05:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:05:23 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:23 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8d06012a-5e32-449e-bda6-89961324dc68 does not exist
Nov 22 06:05:23 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9df89602-923c-406e-a568-d05b605db1cd does not exist
Nov 22 06:05:23 compute-0 sudo[285203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:05:23 compute-0 sudo[285203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:23 compute-0 sudo[285203]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:05:23 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108723370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:05:23 compute-0 nova_compute[255660]: 2025-11-22 06:05:23.400 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:05:23 compute-0 sudo[285228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:05:23 compute-0 sudo[285228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:05:23 compute-0 nova_compute[255660]: 2025-11-22 06:05:23.410 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:05:23 compute-0 sudo[285228]: pam_unix(sudo:session): session closed for user root
Nov 22 06:05:23 compute-0 nova_compute[255660]: 2025-11-22 06:05:23.428 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:05:23 compute-0 nova_compute[255660]: 2025-11-22 06:05:23.432 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:05:23 compute-0 nova_compute[255660]: 2025-11-22 06:05:23.433 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:05:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:23 compute-0 ceph-mon[75840]: pgmap v1379: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:23 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:23 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:05:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2108723370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:05:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:26 compute-0 ceph-mon[75840]: pgmap v1380: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:28 compute-0 ceph-mon[75840]: pgmap v1381: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:29 compute-0 nova_compute[255660]: 2025-11-22 06:05:29.411 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:29 compute-0 nova_compute[255660]: 2025-11-22 06:05:29.690 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:30 compute-0 nova_compute[255660]: 2025-11-22 06:05:30.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:30 compute-0 nova_compute[255660]: 2025-11-22 06:05:30.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:30 compute-0 nova_compute[255660]: 2025-11-22 06:05:30.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:05:30 compute-0 nova_compute[255660]: 2025-11-22 06:05:30.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:30 compute-0 ceph-mon[75840]: pgmap v1382: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:31 compute-0 ceph-mon[75840]: pgmap v1383: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:32 compute-0 nova_compute[255660]: 2025-11-22 06:05:32.288 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:32 compute-0 nova_compute[255660]: 2025-11-22 06:05:32.289 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:33 compute-0 nova_compute[255660]: 2025-11-22 06:05:33.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:33 compute-0 nova_compute[255660]: 2025-11-22 06:05:33.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 06:05:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:34 compute-0 ceph-mon[75840]: pgmap v1384: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:34 compute-0 nova_compute[255660]: 2025-11-22 06:05:34.218 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:34 compute-0 nova_compute[255660]: 2025-11-22 06:05:34.417 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:35 compute-0 nova_compute[255660]: 2025-11-22 06:05:35.301 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:36 compute-0 ceph-mon[75840]: pgmap v1385: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:05:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:05:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:36.947 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:05:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.860 255664 DEBUG oslo_concurrency.processutils [None req-f24b0f1e-644b-420a-9281-37ac9adaadd4 044d47622a784618a29823cd785e2e31 5830132eb4c840bd906214f1719ec76f - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:05:37 compute-0 nova_compute[255660]: 2025-11-22 06:05:37.879 255664 DEBUG oslo_concurrency.processutils [None req-f24b0f1e-644b-420a-9281-37ac9adaadd4 044d47622a784618a29823cd785e2e31 5830132eb4c840bd906214f1719ec76f - - default default] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:05:38 compute-0 ceph-mon[75840]: pgmap v1386: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:39 compute-0 ceph-mon[75840]: pgmap v1387: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:40 compute-0 podman[285256]: 2025-11-22 06:05:40.240256329 +0000 UTC m=+0.099593562 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 06:05:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:05:42 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9147 writes, 34K keys, 9147 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9147 writes, 2199 syncs, 4.16 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1822 writes, 5236 keys, 1822 commit groups, 1.0 writes per commit group, ingest: 6.79 MB, 0.01 MB/s
                                           Interval WAL: 1822 writes, 656 syncs, 2.78 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:05:42 compute-0 ceph-mon[75840]: pgmap v1388: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:05:43
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:05:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:05:44 compute-0 ceph-mon[75840]: pgmap v1389: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:05:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:05:44 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:44.671 164618 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '92:e2:92', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '5a:37:45:26:ef:96'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 06:05:44 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:44.672 164618 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 06:05:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:45 compute-0 ceph-mon[75840]: pgmap v1390: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:45 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:05:45.674 164618 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=772af8e6-0f26-443e-a044-9109439e729d, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 06:05:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:05:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:05:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:05:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:05:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:05:47 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3213 syncs, 3.78 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3014 writes, 9902 keys, 3014 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
                                           Interval WAL: 3014 writes, 1129 syncs, 2.67 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:05:48 compute-0 ceph-mon[75840]: pgmap v1391: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:05:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/4260931621' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:05:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:50 compute-0 ceph-mon[75840]: pgmap v1392: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:50 compute-0 podman[285282]: 2025-11-22 06:05:50.230968006 +0000 UTC m=+0.077647491 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 06:05:50 compute-0 podman[285283]: 2025-11-22 06:05:50.258660092 +0000 UTC m=+0.098062590 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 22 06:05:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:52 compute-0 ceph-mon[75840]: pgmap v1393: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:53 compute-0 ceph-mon[75840]: pgmap v1394: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:05:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:05:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:05:53 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2769 syncs, 3.81 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1633 writes, 3817 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 2.01 MB, 0.00 MB/s
                                           Interval WAL: 1633 writes, 528 syncs, 3.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:05:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:56 compute-0 ceph-mon[75840]: pgmap v1395: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:56 compute-0 ceph-mgr[76134]: [devicehealth INFO root] Check health
Nov 22 06:05:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:58 compute-0 ceph-mon[75840]: pgmap v1396: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:05:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:05:59 compute-0 ceph-mon[75840]: pgmap v1397: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:02 compute-0 ceph-mon[75840]: pgmap v1398: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:03 compute-0 ceph-mon[75840]: pgmap v1399: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:06 compute-0 ceph-mon[75840]: pgmap v1400: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:08 compute-0 ceph-mon[75840]: pgmap v1401: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:09 compute-0 ceph-mon[75840]: pgmap v1402: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:11 compute-0 podman[285319]: 2025-11-22 06:06:11.304327131 +0000 UTC m=+0.150672647 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 06:06:12 compute-0 ceph-mon[75840]: pgmap v1403: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:13 compute-0 ceph-mon[75840]: pgmap v1404: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:16 compute-0 ceph-mon[75840]: pgmap v1405: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:17 compute-0 ceph-mon[75840]: pgmap v1406: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:20 compute-0 ceph-mon[75840]: pgmap v1407: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:21 compute-0 podman[285345]: 2025-11-22 06:06:21.212053496 +0000 UTC m=+0.069054110 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 06:06:21 compute-0 ceph-mon[75840]: pgmap v1408: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:21 compute-0 podman[285346]: 2025-11-22 06:06:21.245797244 +0000 UTC m=+0.087879827 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.169 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.170 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.171 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:06:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:06:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999093216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.647 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:06:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2999093216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.846 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.847 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4957MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.848 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:06:22 compute-0 nova_compute[255660]: 2025-11-22 06:06:22.848 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.057 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.058 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:06:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.168 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing inventories for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.296 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating ProviderTree inventory for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.297 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Updating inventory in ProviderTree for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.316 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing aggregate associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.351 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Refreshing trait associations for resource provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60, traits: HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE42,HW_CPU_X86_AVX,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SHA,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.374 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:06:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:23 compute-0 sudo[285405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:23 compute-0 sudo[285405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:23 compute-0 sudo[285405]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:23 compute-0 sudo[285449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:06:23 compute-0 sudo[285449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:23 compute-0 sudo[285449]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:23 compute-0 ceph-mon[75840]: pgmap v1409: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:23 compute-0 sudo[285474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:23 compute-0 sudo[285474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:23 compute-0 sudo[285474]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:23 compute-0 sudo[285499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 06:06:23 compute-0 sudo[285499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:06:23 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836846607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.849 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.856 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.880 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.885 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:06:23 compute-0 nova_compute[255660]: 2025-11-22 06:06:23.886 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:06:24 compute-0 sudo[285499]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:06:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:06:24 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:24 compute-0 sudo[285545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:24 compute-0 sudo[285545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:24 compute-0 sudo[285545]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:24 compute-0 sudo[285570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:06:24 compute-0 sudo[285570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:24 compute-0 sudo[285570]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:24 compute-0 sudo[285595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:24 compute-0 sudo[285595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:24 compute-0 sudo[285595]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:24 compute-0 sudo[285620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:06:24 compute-0 sudo[285620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1836846607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:06:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:24 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:25 compute-0 sudo[285620]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev cab87328-7ee6-4d4b-a410-a01ea3947ae5 does not exist
Nov 22 06:06:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 81bd2fab-d606-46bc-ae8e-5e278710b98d does not exist
Nov 22 06:06:25 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev e4d9c633-e346-4c1c-a2c3-edbb778a6a1c does not exist
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:06:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:06:25 compute-0 sudo[285675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:25 compute-0 sudo[285675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:25 compute-0 sudo[285675]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:25 compute-0 sudo[285700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:06:25 compute-0 sudo[285700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:25 compute-0 sudo[285700]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:25 compute-0 sudo[285725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:25 compute-0 sudo[285725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:25 compute-0 sudo[285725]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:25 compute-0 sudo[285750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:06:25 compute-0 sudo[285750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:25 compute-0 ceph-mon[75840]: pgmap v1410: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:06:25 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.006200935 +0000 UTC m=+0.072443592 container create 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:06:26 compute-0 systemd[1]: Started libpod-conmon-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope.
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:25.978819917 +0000 UTC m=+0.045062674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.103050851 +0000 UTC m=+0.169293598 container init 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.114522769 +0000 UTC m=+0.180765466 container start 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.11864466 +0000 UTC m=+0.184887417 container attach 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 06:06:26 compute-0 nostalgic_blackwell[285834]: 167 167
Nov 22 06:06:26 compute-0 systemd[1]: libpod-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope: Deactivated successfully.
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.122732771 +0000 UTC m=+0.188975438 container died 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 06:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc73c0129d73db71efa9c3cd3cf319d09c649a26ffea31ca531fc1c17c496a42-merged.mount: Deactivated successfully.
Nov 22 06:06:26 compute-0 podman[285817]: 2025-11-22 06:06:26.174442942 +0000 UTC m=+0.240685629 container remove 8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 06:06:26 compute-0 systemd[1]: libpod-conmon-8a95bff60f7929d9c4a7d85ca0ee492fd1186777a0b371574e2affd1b2bc3a09.scope: Deactivated successfully.
Nov 22 06:06:26 compute-0 podman[285858]: 2025-11-22 06:06:26.393156599 +0000 UTC m=+0.073984972 container create 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 06:06:26 compute-0 systemd[1]: Started libpod-conmon-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope.
Nov 22 06:06:26 compute-0 podman[285858]: 2025-11-22 06:06:26.367407756 +0000 UTC m=+0.048236199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:26 compute-0 podman[285858]: 2025-11-22 06:06:26.500724014 +0000 UTC m=+0.181552437 container init 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:06:26 compute-0 podman[285858]: 2025-11-22 06:06:26.514435503 +0000 UTC m=+0.195263876 container start 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 06:06:26 compute-0 podman[285858]: 2025-11-22 06:06:26.522811949 +0000 UTC m=+0.203640382 container attach 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:06:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:27 compute-0 ceph-mon[75840]: pgmap v1411: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:27 compute-0 festive_bell[285874]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:06:27 compute-0 festive_bell[285874]: --> relative data size: 1.0
Nov 22 06:06:27 compute-0 festive_bell[285874]: --> All data devices are unavailable
Nov 22 06:06:27 compute-0 systemd[1]: libpod-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Deactivated successfully.
Nov 22 06:06:27 compute-0 podman[285858]: 2025-11-22 06:06:27.626263307 +0000 UTC m=+1.307091670 container died 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 06:06:27 compute-0 systemd[1]: libpod-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Consumed 1.066s CPU time.
Nov 22 06:06:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b52655ec86b5bf68f830604e44004ada5ef487bad27d645d3d4478901755ec3c-merged.mount: Deactivated successfully.
Nov 22 06:06:27 compute-0 podman[285858]: 2025-11-22 06:06:27.691332858 +0000 UTC m=+1.372161201 container remove 9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 06:06:27 compute-0 systemd[1]: libpod-conmon-9bfb2796fbaca0a4ee9ffe80622f1f00a973aeb64766c17b28021a25d1164ea9.scope: Deactivated successfully.
Nov 22 06:06:27 compute-0 sudo[285750]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:27 compute-0 sudo[285919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:27 compute-0 sudo[285919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:27 compute-0 sudo[285919]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:27 compute-0 sudo[285944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:06:27 compute-0 sudo[285944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:27 compute-0 sudo[285944]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:27 compute-0 sudo[285969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:27 compute-0 sudo[285969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:27 compute-0 sudo[285969]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:27 compute-0 sudo[285994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:06:27 compute-0 sudo[285994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.398595364 +0000 UTC m=+0.045963179 container create ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:06:28 compute-0 systemd[1]: Started libpod-conmon-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope.
Nov 22 06:06:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.380140476 +0000 UTC m=+0.027508301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.48169199 +0000 UTC m=+0.129059795 container init ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 06:06:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.492171982 +0000 UTC m=+0.139539797 container start ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.495482551 +0000 UTC m=+0.142850356 container attach ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 06:06:28 compute-0 systemd[1]: libpod-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope: Deactivated successfully.
Nov 22 06:06:28 compute-0 objective_leakey[286075]: 167 167
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.499015946 +0000 UTC m=+0.146383761 container died ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-761290aa060ba6a09612d6c375b5f465e203076c298d8c7da7b2ad47552b4248-merged.mount: Deactivated successfully.
Nov 22 06:06:28 compute-0 podman[286059]: 2025-11-22 06:06:28.555831285 +0000 UTC m=+0.203199130 container remove ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_leakey, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:06:28 compute-0 systemd[1]: libpod-conmon-ee776a9add1b3d5414275752b045480255d8e42e85832a529b92dbe1b9a86fd2.scope: Deactivated successfully.
Nov 22 06:06:28 compute-0 podman[286098]: 2025-11-22 06:06:28.774329876 +0000 UTC m=+0.067369914 container create ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 06:06:28 compute-0 systemd[1]: Started libpod-conmon-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope.
Nov 22 06:06:28 compute-0 podman[286098]: 2025-11-22 06:06:28.746555118 +0000 UTC m=+0.039595216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:28 compute-0 podman[286098]: 2025-11-22 06:06:28.872108068 +0000 UTC m=+0.165148146 container init ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 06:06:28 compute-0 podman[286098]: 2025-11-22 06:06:28.883442482 +0000 UTC m=+0.176482530 container start ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:06:28 compute-0 podman[286098]: 2025-11-22 06:06:28.88890852 +0000 UTC m=+0.181948558 container attach ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 22 06:06:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:29 compute-0 ceph-mon[75840]: pgmap v1412: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:29 compute-0 elastic_hellman[286114]: {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     "0": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "devices": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "/dev/loop3"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             ],
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_name": "ceph_lv0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_size": "21470642176",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "name": "ceph_lv0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "tags": {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_name": "ceph",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.crush_device_class": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.encrypted": "0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_id": "0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.vdo": "0"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             },
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "vg_name": "ceph_vg0"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         }
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     ],
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     "1": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "devices": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "/dev/loop4"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             ],
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_name": "ceph_lv1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_size": "21470642176",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "name": "ceph_lv1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "tags": {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_name": "ceph",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.crush_device_class": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.encrypted": "0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_id": "1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.vdo": "0"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             },
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "vg_name": "ceph_vg1"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         }
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     ],
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     "2": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "devices": [
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "/dev/loop5"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             ],
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_name": "ceph_lv2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_size": "21470642176",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "name": "ceph_lv2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "tags": {
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.cluster_name": "ceph",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.crush_device_class": "",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.encrypted": "0",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osd_id": "2",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:                 "ceph.vdo": "0"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             },
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "type": "block",
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:             "vg_name": "ceph_vg2"
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:         }
Nov 22 06:06:29 compute-0 elastic_hellman[286114]:     ]
Nov 22 06:06:29 compute-0 elastic_hellman[286114]: }
Nov 22 06:06:29 compute-0 systemd[1]: libpod-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope: Deactivated successfully.
Nov 22 06:06:29 compute-0 podman[286098]: 2025-11-22 06:06:29.693404771 +0000 UTC m=+0.986444789 container died ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:06:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8bdd742a81fc5c0640699bf175520ae857c4628e8dfa7d17fc84e0892202179-merged.mount: Deactivated successfully.
Nov 22 06:06:29 compute-0 podman[286098]: 2025-11-22 06:06:29.78101582 +0000 UTC m=+1.074055838 container remove ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:06:29 compute-0 systemd[1]: libpod-conmon-ecfc86f4ce76065f6d9725161b958167d85483194a323e25f800a72650eaca6e.scope: Deactivated successfully.
Nov 22 06:06:29 compute-0 sudo[285994]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:29 compute-0 nova_compute[255660]: 2025-11-22 06:06:29.887 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:29 compute-0 sudo[286135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:29 compute-0 sudo[286135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:29 compute-0 sudo[286135]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:30 compute-0 sudo[286160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:06:30 compute-0 sudo[286160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:30 compute-0 sudo[286160]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:30 compute-0 sudo[286185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:30 compute-0 sudo[286185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:30 compute-0 sudo[286185]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:30 compute-0 sudo[286210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:06:30 compute-0 sudo[286210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.600462314 +0000 UTC m=+0.052411961 container create 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 06:06:30 compute-0 systemd[1]: Started libpod-conmon-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope.
Nov 22 06:06:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.572622845 +0000 UTC m=+0.024572552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.683339924 +0000 UTC m=+0.135289561 container init 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.693532599 +0000 UTC m=+0.145482216 container start 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.697213028 +0000 UTC m=+0.149162735 container attach 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 06:06:30 compute-0 sleepy_nash[286291]: 167 167
Nov 22 06:06:30 compute-0 systemd[1]: libpod-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope: Deactivated successfully.
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.700548058 +0000 UTC m=+0.152497685 container died 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0191847ed57a93b7dd3737107650b20c1d241cd6f07fd8c2bc3ca5cec98e0a37-merged.mount: Deactivated successfully.
Nov 22 06:06:30 compute-0 podman[286275]: 2025-11-22 06:06:30.74001272 +0000 UTC m=+0.191962377 container remove 2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 06:06:30 compute-0 systemd[1]: libpod-conmon-2ebf6a690de63812e19f90e6bb453d003134fd4cc0e6d3df8dfa2a8f64b1a8cd.scope: Deactivated successfully.
Nov 22 06:06:30 compute-0 podman[286314]: 2025-11-22 06:06:30.961171912 +0000 UTC m=+0.061400473 container create 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:06:31 compute-0 systemd[1]: Started libpod-conmon-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope.
Nov 22 06:06:31 compute-0 podman[286314]: 2025-11-22 06:06:30.936750415 +0000 UTC m=+0.036979056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:06:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:06:31 compute-0 podman[286314]: 2025-11-22 06:06:31.057788523 +0000 UTC m=+0.158017134 container init 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:06:31 compute-0 podman[286314]: 2025-11-22 06:06:31.072953361 +0000 UTC m=+0.173181942 container start 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 06:06:31 compute-0 podman[286314]: 2025-11-22 06:06:31.077681958 +0000 UTC m=+0.177910549 container attach 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:06:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:31 compute-0 nova_compute[255660]: 2025-11-22 06:06:31.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:31 compute-0 nova_compute[255660]: 2025-11-22 06:06:31.127 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:31 compute-0 nova_compute[255660]: 2025-11-22 06:06:31.128 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:06:32 compute-0 nova_compute[255660]: 2025-11-22 06:06:32.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]: {
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_id": 1,
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "type": "bluestore"
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     },
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_id": 2,
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "type": "bluestore"
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     },
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_id": 0,
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:         "type": "bluestore"
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]:     }
Nov 22 06:06:32 compute-0 agitated_hodgkin[286331]: }
Nov 22 06:06:32 compute-0 ceph-mon[75840]: pgmap v1413: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:32 compute-0 systemd[1]: libpod-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Deactivated successfully.
Nov 22 06:06:32 compute-0 systemd[1]: libpod-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Consumed 1.150s CPU time.
Nov 22 06:06:32 compute-0 podman[286364]: 2025-11-22 06:06:32.252807615 +0000 UTC m=+0.027879802 container died 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d7f35d98b196e9a0d68e5e778957a27b7db5c48263d181a0e4808c37319add8-merged.mount: Deactivated successfully.
Nov 22 06:06:32 compute-0 podman[286364]: 2025-11-22 06:06:32.31280045 +0000 UTC m=+0.087872647 container remove 4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 06:06:32 compute-0 systemd[1]: libpod-conmon-4556fc0d43076b8561617fd744912f7ae46aba97f7ddc377f4051e5e6e4772bd.scope: Deactivated successfully.
Nov 22 06:06:32 compute-0 sudo[286210]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:06:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:32 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:06:32 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:32 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c3ce0438-ed29-4266-ae65-a56a28c5bac2 does not exist
Nov 22 06:06:32 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 3dc6c435-28e8-462c-9f94-79e4b8a37618 does not exist
Nov 22 06:06:32 compute-0 sudo[286379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:06:32 compute-0 sudo[286379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:32 compute-0 sudo[286379]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:32 compute-0 sudo[286404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:06:32 compute-0 sudo[286404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:06:32 compute-0 sudo[286404]: pam_unix(sudo:session): session closed for user root
Nov 22 06:06:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:33 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:06:33 compute-0 ceph-mon[75840]: pgmap v1414: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:34 compute-0 nova_compute[255660]: 2025-11-22 06:06:34.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:35 compute-0 nova_compute[255660]: 2025-11-22 06:06:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:35 compute-0 nova_compute[255660]: 2025-11-22 06:06:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:35 compute-0 ceph-mon[75840]: pgmap v1415: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.948 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:06:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.949 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:06:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:06:36.949 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:06:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:37 compute-0 ceph-mon[75840]: pgmap v1416: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:39 compute-0 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:06:39 compute-0 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:06:39 compute-0 nova_compute[255660]: 2025-11-22 06:06:39.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:06:39 compute-0 nova_compute[255660]: 2025-11-22 06:06:39.151 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:06:39 compute-0 ceph-mon[75840]: pgmap v1417: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:41 compute-0 ceph-mon[75840]: pgmap v1418: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:42 compute-0 podman[286429]: 2025-11-22 06:06:42.29393287 +0000 UTC m=+0.134678595 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:43 compute-0 ceph-mon[75840]: pgmap v1419: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:06:43
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta']
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:06:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:06:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:06:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:45 compute-0 ceph-mon[75840]: pgmap v1420: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:47 compute-0 ceph-mon[75840]: pgmap v1421: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:06:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:06:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:06:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:06:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:06:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/44305888' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:06:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:49 compute-0 ceph-mon[75840]: pgmap v1422: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:52 compute-0 ceph-mon[75840]: pgmap v1423: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:52 compute-0 podman[286456]: 2025-11-22 06:06:52.376877081 +0000 UTC m=+0.083485607 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 06:06:52 compute-0 podman[286457]: 2025-11-22 06:06:52.418194393 +0000 UTC m=+0.117072472 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:06:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:06:53 compute-0 ceph-mon[75840]: pgmap v1424: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:55 compute-0 ceph-mon[75840]: pgmap v1425: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:57 compute-0 ceph-mon[75840]: pgmap v1426: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:06:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:06:59 compute-0 ceph-mon[75840]: pgmap v1427: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:01 compute-0 ceph-mon[75840]: pgmap v1428: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:03 compute-0 ceph-mon[75840]: pgmap v1429: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.501127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623501191, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1286, "num_deletes": 250, "total_data_size": 1982824, "memory_usage": 2016104, "flush_reason": "Manual Compaction"}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623515276, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1166888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31035, "largest_seqno": 32320, "table_properties": {"data_size": 1162319, "index_size": 2029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12080, "raw_average_key_size": 20, "raw_value_size": 1152293, "raw_average_value_size": 1969, "num_data_blocks": 93, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791494, "oldest_key_time": 1763791494, "file_creation_time": 1763791623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14208 microseconds, and 7655 cpu microseconds.
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.515339) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1166888 bytes OK
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.515365) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517217) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517238) EVENT_LOG_v1 {"time_micros": 1763791623517231, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.517259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1977052, prev total WAL file size 1977052, number of live WAL files 2.
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.518330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303034' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1139KB)], [65(9954KB)]
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623518369, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11359837, "oldest_snapshot_seqno": -1}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6229 keys, 8835159 bytes, temperature: kUnknown
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623580162, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8835159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8794059, "index_size": 24414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 156630, "raw_average_key_size": 25, "raw_value_size": 8682984, "raw_average_value_size": 1393, "num_data_blocks": 1001, "num_entries": 6229, "num_filter_entries": 6229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.580458) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8835159 bytes
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.581917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.6 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(17.3) write-amplify(7.6) OK, records in: 6682, records dropped: 453 output_compression: NoCompression
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.581946) EVENT_LOG_v1 {"time_micros": 1763791623581932, "job": 36, "event": "compaction_finished", "compaction_time_micros": 61882, "compaction_time_cpu_micros": 39999, "output_level": 6, "num_output_files": 1, "total_output_size": 8835159, "num_input_records": 6682, "num_output_records": 6229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623582430, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791623585756, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.518275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:03 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:07:03.585839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:07:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:05 compute-0 ceph-mon[75840]: pgmap v1430: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:07 compute-0 ceph-mon[75840]: pgmap v1431: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:07 compute-0 sshd-session[286495]: Invalid user validator from 80.94.92.166 port 59574
Nov 22 06:07:07 compute-0 sshd-session[286495]: Connection closed by invalid user validator 80.94.92.166 port 59574 [preauth]
Nov 22 06:07:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:09 compute-0 ceph-mon[75840]: pgmap v1432: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:11 compute-0 ceph-mon[75840]: pgmap v1433: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:13 compute-0 ceph-mon[75840]: pgmap v1434: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:13 compute-0 podman[286497]: 2025-11-22 06:07:13.270755956 +0000 UTC m=+0.125829938 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 06:07:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:15 compute-0 ceph-mon[75840]: pgmap v1435: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 22 06:07:17 compute-0 ceph-mon[75840]: pgmap v1436: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Nov 22 06:07:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 22 06:07:19 compute-0 ceph-mon[75840]: pgmap v1437: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 22 06:07:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 22 06:07:21 compute-0 ceph-mon[75840]: pgmap v1438: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.167 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.168 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:07:23 compute-0 podman[286524]: 2025-11-22 06:07:23.198451229 +0000 UTC m=+0.053744717 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 06:07:23 compute-0 podman[286525]: 2025-11-22 06:07:23.243571775 +0000 UTC m=+0.084683761 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 22 06:07:23 compute-0 ceph-mon[75840]: pgmap v1439: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:07:23 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3293739466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.572 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.746 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.747 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.748 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.748 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.832 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.833 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:07:23 compute-0 nova_compute[255660]: 2025-11-22 06:07:23.854 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:07:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:07:24 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942960877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:07:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3293739466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:07:24 compute-0 nova_compute[255660]: 2025-11-22 06:07:24.294 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:07:24 compute-0 nova_compute[255660]: 2025-11-22 06:07:24.300 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:07:24 compute-0 nova_compute[255660]: 2025-11-22 06:07:24.337 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:07:24 compute-0 nova_compute[255660]: 2025-11-22 06:07:24.340 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:07:24 compute-0 nova_compute[255660]: 2025-11-22 06:07:24.341 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:07:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/942960877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:07:25 compute-0 ceph-mon[75840]: pgmap v1440: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:27 compute-0 ceph-mon[75840]: pgmap v1441: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 06:07:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 22 06:07:29 compute-0 ceph-mon[75840]: pgmap v1442: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 22 06:07:29 compute-0 nova_compute[255660]: 2025-11-22 06:07:29.342 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 06:07:31 compute-0 ceph-mon[75840]: pgmap v1443: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 22 06:07:32 compute-0 nova_compute[255660]: 2025-11-22 06:07:32.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:32 compute-0 nova_compute[255660]: 2025-11-22 06:07:32.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:32 compute-0 sudo[286605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:32 compute-0 sudo[286605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:32 compute-0 sudo[286605]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:32 compute-0 sudo[286630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:07:32 compute-0 sudo[286630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:32 compute-0 sudo[286630]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:32 compute-0 sudo[286655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:32 compute-0 sudo[286655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:32 compute-0 sudo[286655]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:32 compute-0 sudo[286680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:07:32 compute-0 sudo[286680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:33 compute-0 nova_compute[255660]: 2025-11-22 06:07:33.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:33 compute-0 nova_compute[255660]: 2025-11-22 06:07:33.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:07:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 22 06:07:33 compute-0 ceph-mon[75840]: pgmap v1444: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 22 06:07:33 compute-0 sudo[286680]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 64058299-def9-4639-a731-5a572996b6d9 does not exist
Nov 22 06:07:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 8981e949-e5ca-45db-add6-080fb7bc4736 does not exist
Nov 22 06:07:33 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a4211a8a-df16-4701-a55d-a0b0104813dd does not exist
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:07:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:07:33 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:07:33 compute-0 sudo[286736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:33 compute-0 sudo[286736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:33 compute-0 sudo[286736]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:33 compute-0 sudo[286761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:07:33 compute-0 sudo[286761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:33 compute-0 sudo[286761]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:33 compute-0 sudo[286786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:33 compute-0 sudo[286786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:33 compute-0 sudo[286786]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:33 compute-0 sudo[286811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:07:33 compute-0 sudo[286811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:34 compute-0 nova_compute[255660]: 2025-11-22 06:07:34.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:34 compute-0 nova_compute[255660]: 2025-11-22 06:07:34.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:07:34 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.449369606 +0000 UTC m=+0.105555312 container create 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.384936712 +0000 UTC m=+0.041122468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:34 compute-0 systemd[1]: Started libpod-conmon-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope.
Nov 22 06:07:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.761010924 +0000 UTC m=+0.417196680 container init 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.774030224 +0000 UTC m=+0.430215930 container start 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 22 06:07:34 compute-0 clever_keldysh[286893]: 167 167
Nov 22 06:07:34 compute-0 systemd[1]: libpod-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope: Deactivated successfully.
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.808616045 +0000 UTC m=+0.464801811 container attach 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:07:34 compute-0 podman[286877]: 2025-11-22 06:07:34.810723891 +0000 UTC m=+0.466909597 container died 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 06:07:35 compute-0 nova_compute[255660]: 2025-11-22 06:07:35.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a7c48b9d92346edc63f62565474e77c18a5b70b6213243e6f5b99f70681096-merged.mount: Deactivated successfully.
Nov 22 06:07:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:35 compute-0 podman[286877]: 2025-11-22 06:07:35.462824602 +0000 UTC m=+1.119010308 container remove 7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_keldysh, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 06:07:35 compute-0 systemd[1]: libpod-conmon-7625163975f6bd6da97c2c7db1b4a22e039dc7bf242e76113fee9eee5d204a88.scope: Deactivated successfully.
Nov 22 06:07:35 compute-0 ceph-mon[75840]: pgmap v1445: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:35 compute-0 podman[286920]: 2025-11-22 06:07:35.666868283 +0000 UTC m=+0.036192024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:35 compute-0 podman[286920]: 2025-11-22 06:07:35.823674623 +0000 UTC m=+0.192998384 container create 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:07:35 compute-0 systemd[1]: Started libpod-conmon-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope.
Nov 22 06:07:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:36 compute-0 podman[286920]: 2025-11-22 06:07:36.062613185 +0000 UTC m=+0.431937036 container init 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 06:07:36 compute-0 podman[286920]: 2025-11-22 06:07:36.07317741 +0000 UTC m=+0.442501161 container start 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:07:36 compute-0 nova_compute[255660]: 2025-11-22 06:07:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:36 compute-0 podman[286920]: 2025-11-22 06:07:36.16757561 +0000 UTC m=+0.536899371 container attach 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:07:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.950 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:07:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:07:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:07:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:07:37 compute-0 relaxed_tharp[286936]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:07:37 compute-0 relaxed_tharp[286936]: --> relative data size: 1.0
Nov 22 06:07:37 compute-0 relaxed_tharp[286936]: --> All data devices are unavailable
Nov 22 06:07:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:37 compute-0 systemd[1]: libpod-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Deactivated successfully.
Nov 22 06:07:37 compute-0 systemd[1]: libpod-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Consumed 1.053s CPU time.
Nov 22 06:07:37 compute-0 podman[286965]: 2025-11-22 06:07:37.236298073 +0000 UTC m=+0.046088712 container died 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:07:37 compute-0 ceph-mon[75840]: pgmap v1446: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-19f970fc799deabd537b5ff93ee5e629b32b56e68697854daf996bf678f4abc7-merged.mount: Deactivated successfully.
Nov 22 06:07:37 compute-0 podman[286965]: 2025-11-22 06:07:37.30494578 +0000 UTC m=+0.114736459 container remove 63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 06:07:37 compute-0 systemd[1]: libpod-conmon-63b255b59c6e572163f294ea15dc15ef46668c8dfa4a44001782175776b990d4.scope: Deactivated successfully.
Nov 22 06:07:37 compute-0 sudo[286811]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:37 compute-0 sudo[286980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:37 compute-0 sudo[286980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:37 compute-0 sudo[286980]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:37 compute-0 sudo[287005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:07:37 compute-0 sudo[287005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:37 compute-0 sudo[287005]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:37 compute-0 sudo[287030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:37 compute-0 sudo[287030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:37 compute-0 sudo[287030]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:37 compute-0 sudo[287055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:07:37 compute-0 sudo[287055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.061343509 +0000 UTC m=+0.059475953 container create 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:07:38 compute-0 systemd[1]: Started libpod-conmon-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope.
Nov 22 06:07:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.041650639 +0000 UTC m=+0.039783083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.138966337 +0000 UTC m=+0.137098781 container init 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.148032332 +0000 UTC m=+0.146164766 container start 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.151718451 +0000 UTC m=+0.149850885 container attach 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:07:38 compute-0 systemd[1]: libpod-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope: Deactivated successfully.
Nov 22 06:07:38 compute-0 determined_nash[287137]: 167 167
Nov 22 06:07:38 compute-0 conmon[287137]: conmon 00a0fa0788957f67c489 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope/container/memory.events
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.155324498 +0000 UTC m=+0.153456952 container died 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 06:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1649ea9195e6e408501d8e4a6e781008a15396d8c9f5330732b8ab2f8bb2cab-merged.mount: Deactivated successfully.
Nov 22 06:07:38 compute-0 podman[287121]: 2025-11-22 06:07:38.202458436 +0000 UTC m=+0.200590870 container remove 00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:07:38 compute-0 systemd[1]: libpod-conmon-00a0fa0788957f67c48923643354af3b60e249238851c76395c31ea56eddb7ff.scope: Deactivated successfully.
Nov 22 06:07:38 compute-0 podman[287160]: 2025-11-22 06:07:38.401305448 +0000 UTC m=+0.046777140 container create 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:07:38 compute-0 systemd[1]: Started libpod-conmon-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope.
Nov 22 06:07:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:38 compute-0 podman[287160]: 2025-11-22 06:07:38.381809054 +0000 UTC m=+0.027280796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:38 compute-0 podman[287160]: 2025-11-22 06:07:38.502683587 +0000 UTC m=+0.148155299 container init 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:07:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:38 compute-0 podman[287160]: 2025-11-22 06:07:38.510895178 +0000 UTC m=+0.156366870 container start 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 06:07:38 compute-0 podman[287160]: 2025-11-22 06:07:38.54180776 +0000 UTC m=+0.187279452 container attach 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:07:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:39 compute-0 ceph-mon[75840]: pgmap v1447: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]: {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     "0": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "devices": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "/dev/loop3"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             ],
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_name": "ceph_lv0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_size": "21470642176",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "name": "ceph_lv0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "tags": {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_name": "ceph",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.crush_device_class": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.encrypted": "0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_id": "0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.vdo": "0"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             },
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "vg_name": "ceph_vg0"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         }
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     ],
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     "1": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "devices": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "/dev/loop4"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             ],
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_name": "ceph_lv1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_size": "21470642176",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "name": "ceph_lv1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "tags": {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_name": "ceph",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.crush_device_class": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.encrypted": "0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_id": "1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.vdo": "0"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             },
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "vg_name": "ceph_vg1"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         }
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     ],
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     "2": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "devices": [
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "/dev/loop5"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             ],
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_name": "ceph_lv2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_size": "21470642176",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "name": "ceph_lv2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "tags": {
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.cluster_name": "ceph",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.crush_device_class": "",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.encrypted": "0",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osd_id": "2",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:                 "ceph.vdo": "0"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             },
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "type": "block",
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:             "vg_name": "ceph_vg2"
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:         }
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]:     ]
Nov 22 06:07:39 compute-0 vigorous_nightingale[287176]: }
Nov 22 06:07:39 compute-0 systemd[1]: libpod-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope: Deactivated successfully.
Nov 22 06:07:39 compute-0 podman[287160]: 2025-11-22 06:07:39.306308326 +0000 UTC m=+0.951780048 container died 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ea615c18abefc98cb5822405b920e1e6c8fb311c025e44fe9a577b1eb06a48-merged.mount: Deactivated successfully.
Nov 22 06:07:39 compute-0 podman[287160]: 2025-11-22 06:07:39.36107774 +0000 UTC m=+1.006549432 container remove 5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 06:07:39 compute-0 systemd[1]: libpod-conmon-5e962066dda6e6984729b4807ba7f2a83952f7cef9dec5fd7717e8d8b55e6977.scope: Deactivated successfully.
Nov 22 06:07:39 compute-0 sudo[287055]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:39 compute-0 sudo[287197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:39 compute-0 sudo[287197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:39 compute-0 sudo[287197]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:39 compute-0 sudo[287222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:07:39 compute-0 sudo[287222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:39 compute-0 sudo[287222]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:39 compute-0 sudo[287247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:39 compute-0 sudo[287247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:39 compute-0 sudo[287247]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:39 compute-0 sudo[287272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:07:39 compute-0 sudo[287272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.131044983 +0000 UTC m=+0.055245449 container create 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 06:07:40 compute-0 nova_compute[255660]: 2025-11-22 06:07:40.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:07:40 compute-0 nova_compute[255660]: 2025-11-22 06:07:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:07:40 compute-0 nova_compute[255660]: 2025-11-22 06:07:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:07:40 compute-0 nova_compute[255660]: 2025-11-22 06:07:40.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:07:40 compute-0 systemd[1]: Started libpod-conmon-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope.
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.10386824 +0000 UTC m=+0.028068797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.222088913 +0000 UTC m=+0.146289429 container init 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.233586532 +0000 UTC m=+0.157786998 container start 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.237541619 +0000 UTC m=+0.161742125 container attach 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:07:40 compute-0 laughing_hertz[287354]: 167 167
Nov 22 06:07:40 compute-0 systemd[1]: libpod-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope: Deactivated successfully.
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.239784829 +0000 UTC m=+0.163985335 container died 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:07:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe1b99b8a7fb3a5a26391214e50d0b8a6e84bc0b6406897ebe2b261fce3195e-merged.mount: Deactivated successfully.
Nov 22 06:07:40 compute-0 podman[287338]: 2025-11-22 06:07:40.285601212 +0000 UTC m=+0.209801718 container remove 3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:07:40 compute-0 systemd[1]: libpod-conmon-3f877bedf198e8105c6bcd33fe3b3ac86616be7d39a95a6fe300d838d3ebb8c0.scope: Deactivated successfully.
Nov 22 06:07:40 compute-0 podman[287376]: 2025-11-22 06:07:40.511391058 +0000 UTC m=+0.066640204 container create f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 06:07:40 compute-0 systemd[1]: Started libpod-conmon-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope.
Nov 22 06:07:40 compute-0 podman[287376]: 2025-11-22 06:07:40.483397656 +0000 UTC m=+0.038646842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:07:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:07:40 compute-0 podman[287376]: 2025-11-22 06:07:40.606339974 +0000 UTC m=+0.161589130 container init f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 22 06:07:40 compute-0 podman[287376]: 2025-11-22 06:07:40.619502999 +0000 UTC m=+0.174752125 container start f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:07:40 compute-0 podman[287376]: 2025-11-22 06:07:40.623679511 +0000 UTC m=+0.178928657 container attach f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:07:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:41 compute-0 ceph-mon[75840]: pgmap v1448: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]: {
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_id": 1,
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "type": "bluestore"
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     },
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_id": 2,
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "type": "bluestore"
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     },
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_id": 0,
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:         "type": "bluestore"
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]:     }
Nov 22 06:07:41 compute-0 unruffled_knuth[287393]: }
Nov 22 06:07:41 compute-0 systemd[1]: libpod-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Deactivated successfully.
Nov 22 06:07:41 compute-0 systemd[1]: libpod-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Consumed 1.071s CPU time.
Nov 22 06:07:41 compute-0 podman[287426]: 2025-11-22 06:07:41.743384136 +0000 UTC m=+0.036531044 container died f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 06:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89d3355926c11c684a14c08881a1e9cc8e7ebffdb8e39641364dcec93a39e2b-merged.mount: Deactivated successfully.
Nov 22 06:07:41 compute-0 podman[287426]: 2025-11-22 06:07:41.796336192 +0000 UTC m=+0.089483110 container remove f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 06:07:41 compute-0 systemd[1]: libpod-conmon-f2d2dabc306e10fdb1714b46be3d34101c52ef1ed39446c39109a8039febcb45.scope: Deactivated successfully.
Nov 22 06:07:41 compute-0 sudo[287272]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:07:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:41 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:07:41 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev af4cdb90-ac76-4a5c-9dbc-823175df7b23 does not exist
Nov 22 06:07:41 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 69596ee6-99ff-485b-9de9-4126da40d97b does not exist
Nov 22 06:07:41 compute-0 sudo[287442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:07:41 compute-0 sudo[287442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:41 compute-0 sudo[287442]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:41 compute-0 sudo[287467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:07:42 compute-0 sudo[287467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:07:42 compute-0 sudo[287467]: pam_unix(sudo:session): session closed for user root
Nov 22 06:07:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:42 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:07:43
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'backups', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:07:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:07:43 compute-0 ceph-mon[75840]: pgmap v1449: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:07:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:07:44 compute-0 podman[287492]: 2025-11-22 06:07:44.263700518 +0000 UTC m=+0.110469183 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 06:07:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:45 compute-0 ceph-mon[75840]: pgmap v1450: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:47 compute-0 ceph-mon[75840]: pgmap v1451: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:07:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:07:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:07:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:07:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:07:48 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/884860766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:07:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:49 compute-0 ceph-mon[75840]: pgmap v1452: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:51 compute-0 ceph-mon[75840]: pgmap v1453: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:07:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:07:53 compute-0 ceph-mon[75840]: pgmap v1454: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:54 compute-0 podman[287522]: 2025-11-22 06:07:54.238693415 +0000 UTC m=+0.085572404 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:07:54 compute-0 podman[287523]: 2025-11-22 06:07:54.292887613 +0000 UTC m=+0.130613996 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 06:07:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:55 compute-0 ceph-mon[75840]: pgmap v1455: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:57 compute-0 ceph-mon[75840]: pgmap v1456: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:07:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:07:59 compute-0 ceph-mon[75840]: pgmap v1457: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:01 compute-0 ceph-mon[75840]: pgmap v1458: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:03 compute-0 ceph-mon[75840]: pgmap v1459: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:03 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:05 compute-0 ceph-mon[75840]: pgmap v1460: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:07 compute-0 ceph-mon[75840]: pgmap v1461: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:09 compute-0 ceph-mon[75840]: pgmap v1462: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:11 compute-0 ceph-mon[75840]: pgmap v1463: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:14 compute-0 ceph-mon[75840]: pgmap v1464: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:15 compute-0 podman[287561]: 2025-11-22 06:08:15.317688327 +0000 UTC m=+0.169088662 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 22 06:08:16 compute-0 ceph-mon[75840]: pgmap v1465: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:17 compute-0 ceph-mon[75840]: pgmap v1466: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:20 compute-0 ceph-mon[75840]: pgmap v1467: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:22 compute-0 ceph-mon[75840]: pgmap v1468: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:23 compute-0 ceph-mon[75840]: pgmap v1469: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.182 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.183 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.184 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:08:25 compute-0 podman[287589]: 2025-11-22 06:08:25.234016574 +0000 UTC m=+0.082771778 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 06:08:25 compute-0 podman[287588]: 2025-11-22 06:08:25.253839988 +0000 UTC m=+0.102956262 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:08:25 compute-0 ceph-mon[75840]: pgmap v1470: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:08:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327703627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.688 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.895 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.897 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4986MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.897 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.898 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.991 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:08:25 compute-0 nova_compute[255660]: 2025-11-22 06:08:25.992 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.020 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:08:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1327703627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:08:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:08:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335912961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.495 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.502 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.518 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.520 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:08:26 compute-0 nova_compute[255660]: 2025-11-22 06:08:26.520 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:08:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:27 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/335912961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:08:27 compute-0 ceph-mon[75840]: pgmap v1471: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:30 compute-0 ceph-mon[75840]: pgmap v1472: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:30 compute-0 nova_compute[255660]: 2025-11-22 06:08:30.522 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:32 compute-0 ceph-mon[75840]: pgmap v1473: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:33 compute-0 nova_compute[255660]: 2025-11-22 06:08:33.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:33 compute-0 nova_compute[255660]: 2025-11-22 06:08:33.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:33 compute-0 nova_compute[255660]: 2025-11-22 06:08:33.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:08:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:33 compute-0 ceph-mon[75840]: pgmap v1474: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:33 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:35 compute-0 nova_compute[255660]: 2025-11-22 06:08:35.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:35 compute-0 nova_compute[255660]: 2025-11-22 06:08:35.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:36 compute-0 ceph-mon[75840]: pgmap v1475: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:08:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:08:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:08:36.951 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:08:37 compute-0 nova_compute[255660]: 2025-11-22 06:08:37.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:37 compute-0 nova_compute[255660]: 2025-11-22 06:08:37.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:37 compute-0 ceph-mon[75840]: pgmap v1476: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:38 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:39 compute-0 ceph-mon[75840]: pgmap v1477: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:40 compute-0 nova_compute[255660]: 2025-11-22 06:08:40.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:08:40 compute-0 nova_compute[255660]: 2025-11-22 06:08:40.130 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:08:40 compute-0 nova_compute[255660]: 2025-11-22 06:08:40.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:08:40 compute-0 nova_compute[255660]: 2025-11-22 06:08:40.146 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:08:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.271663) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721271694, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1003, "num_deletes": 251, "total_data_size": 1436497, "memory_usage": 1460384, "flush_reason": "Manual Compaction"}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721350874, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1422880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32321, "largest_seqno": 33323, "table_properties": {"data_size": 1417883, "index_size": 2521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10559, "raw_average_key_size": 19, "raw_value_size": 1408014, "raw_average_value_size": 2617, "num_data_blocks": 113, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791625, "oldest_key_time": 1763791625, "file_creation_time": 1763791721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 79311 microseconds, and 4058 cpu microseconds.
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:08:41 compute-0 ceph-mon[75840]: pgmap v1478: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.350970) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1422880 bytes OK
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.350996) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379776) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379804) EVENT_LOG_v1 {"time_micros": 1763791721379796, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.379827) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1431769, prev total WAL file size 1432926, number of live WAL files 2.
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.380768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1389KB)], [68(8628KB)]
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721380817, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10258039, "oldest_snapshot_seqno": -1}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6253 keys, 8474710 bytes, temperature: kUnknown
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721673187, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8474710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8434004, "index_size": 23956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 157762, "raw_average_key_size": 25, "raw_value_size": 8323007, "raw_average_value_size": 1331, "num_data_blocks": 974, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.673717) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8474710 bytes
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.682019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.1 rd, 29.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.4 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 6767, records dropped: 514 output_compression: NoCompression
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.682052) EVENT_LOG_v1 {"time_micros": 1763791721682037, "job": 38, "event": "compaction_finished", "compaction_time_micros": 292483, "compaction_time_cpu_micros": 30345, "output_level": 6, "num_output_files": 1, "total_output_size": 8474710, "num_input_records": 6767, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721682642, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791721685764, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.380677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:41 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:08:41.685835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:08:42 compute-0 sudo[287670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:42 compute-0 sudo[287670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:42 compute-0 sudo[287670]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:42 compute-0 sudo[287695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:08:42 compute-0 sudo[287695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:42 compute-0 sudo[287695]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:42 compute-0 sudo[287720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:42 compute-0 sudo[287720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:42 compute-0 sudo[287720]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:42 compute-0 sudo[287745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 06:08:42 compute-0 sudo[287745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:42 compute-0 podman[287844]: 2025-11-22 06:08:42.999184944 +0000 UTC m=+0.108882002 container exec d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 06:08:43 compute-0 podman[287844]: 2025-11-22 06:08:43.12277351 +0000 UTC m=+0.232470568 container exec_died d2c85725d384a2e19525208f0afc2b37f380a14cd233758b9d5bd2e6f7758107 (image=quay.io/ceph/ceph:v18, name=ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:43 compute-0 ceph-mon[75840]: pgmap v1479: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:08:43
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups']
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:08:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:08:43 compute-0 sudo[287745]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:43 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:08:43 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:08:44 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:44 compute-0 sudo[288005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:44 compute-0 sudo[288005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:44 compute-0 sudo[288005]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:08:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:08:44 compute-0 sudo[288030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:08:44 compute-0 sudo[288030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:44 compute-0 sudo[288030]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:44 compute-0 sudo[288055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:44 compute-0 sudo[288055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:44 compute-0 sudo[288055]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:44 compute-0 sudo[288080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:08:44 compute-0 sudo[288080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:45 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:45 compute-0 sudo[288080]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:08:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:45 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev c5283450-37bc-4f50-89ec-4e62043a10e1 does not exist
Nov 22 06:08:45 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 18798499-e11e-4907-ab75-5067c746f174 does not exist
Nov 22 06:08:45 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9a422c73-4575-4c3d-8199-ac1697c5cc49 does not exist
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:08:45 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:08:45 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:08:45 compute-0 sudo[288136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:45 compute-0 sudo[288136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:45 compute-0 sudo[288136]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:45 compute-0 sudo[288161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:08:45 compute-0 sudo[288161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:45 compute-0 sudo[288161]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:45 compute-0 sudo[288187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:45 compute-0 sudo[288187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:45 compute-0 sudo[288187]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:45 compute-0 podman[288185]: 2025-11-22 06:08:45.544611212 +0000 UTC m=+0.139601199 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 22 06:08:45 compute-0 sudo[288228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:08:45 compute-0 sudo[288228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:45 compute-0 podman[288305]: 2025-11-22 06:08:45.959107768 +0000 UTC m=+0.055925787 container create 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:08:46 compute-0 ceph-mon[75840]: pgmap v1480: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:08:46 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:08:46 compute-0 systemd[1]: Started libpod-conmon-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope.
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:45.932714907 +0000 UTC m=+0.029532926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:46.058571704 +0000 UTC m=+0.155389793 container init 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:46.069032997 +0000 UTC m=+0.165851016 container start 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:46.072947341 +0000 UTC m=+0.169765430 container attach 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:08:46 compute-0 systemd[1]: libpod-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope: Deactivated successfully.
Nov 22 06:08:46 compute-0 stoic_kare[288321]: 167 167
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:46.079810506 +0000 UTC m=+0.176628545 container died 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:46 compute-0 conmon[288321]: conmon 8c8f9db2a2da84e61741 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope/container/memory.events
Nov 22 06:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b01e8c561a2b5abc52c58b64c3ddcd1f321306d3c656c3d2b7d82559cb3f0d7-merged.mount: Deactivated successfully.
Nov 22 06:08:46 compute-0 podman[288305]: 2025-11-22 06:08:46.137215621 +0000 UTC m=+0.234033640 container remove 8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 06:08:46 compute-0 systemd[1]: libpod-conmon-8c8f9db2a2da84e617414b94d7ad9d13d881e7a5874b096eb1b496ecd23b19c6.scope: Deactivated successfully.
Nov 22 06:08:46 compute-0 podman[288344]: 2025-11-22 06:08:46.378035002 +0000 UTC m=+0.075255636 container create e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:08:46 compute-0 podman[288344]: 2025-11-22 06:08:46.346114813 +0000 UTC m=+0.043335487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:46 compute-0 systemd[1]: Started libpod-conmon-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope.
Nov 22 06:08:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:46 compute-0 podman[288344]: 2025-11-22 06:08:46.508878134 +0000 UTC m=+0.206098758 container init e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:08:46 compute-0 podman[288344]: 2025-11-22 06:08:46.520660681 +0000 UTC m=+0.217881285 container start e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:08:46 compute-0 podman[288344]: 2025-11-22 06:08:46.525027518 +0000 UTC m=+0.222248212 container attach e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:08:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:08:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:08:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:08:47 compute-0 ceph-mon[75840]: pgmap v1481: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:08:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/288669014' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:08:47 compute-0 vibrant_greider[288360]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:08:47 compute-0 vibrant_greider[288360]: --> relative data size: 1.0
Nov 22 06:08:47 compute-0 vibrant_greider[288360]: --> All data devices are unavailable
Nov 22 06:08:47 compute-0 systemd[1]: libpod-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Deactivated successfully.
Nov 22 06:08:47 compute-0 podman[288344]: 2025-11-22 06:08:47.623617255 +0000 UTC m=+1.320837889 container died e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:47 compute-0 systemd[1]: libpod-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Consumed 1.054s CPU time.
Nov 22 06:08:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b02ff537f350ee55eeed3e892e2c2394dc556203473a33f1a48406456a98a68e-merged.mount: Deactivated successfully.
Nov 22 06:08:47 compute-0 podman[288344]: 2025-11-22 06:08:47.738015134 +0000 UTC m=+1.435235728 container remove e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_greider, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:08:47 compute-0 systemd[1]: libpod-conmon-e414e4a6197ee9f0c40df1944ada18ad97afc112d433691b2752c1af37856ef6.scope: Deactivated successfully.
Nov 22 06:08:47 compute-0 sudo[288228]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:47 compute-0 sudo[288403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:47 compute-0 sudo[288403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:47 compute-0 sudo[288403]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:47 compute-0 sudo[288428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:08:47 compute-0 sudo[288428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:47 compute-0 sudo[288428]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:48 compute-0 sudo[288453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:48 compute-0 sudo[288453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:48 compute-0 sudo[288453]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:48 compute-0 sudo[288478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:08:48 compute-0 sudo[288478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:48 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:48 compute-0 podman[288543]: 2025-11-22 06:08:48.562801452 +0000 UTC m=+0.032933618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:48 compute-0 podman[288543]: 2025-11-22 06:08:48.665132016 +0000 UTC m=+0.135264122 container create f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 06:08:48 compute-0 systemd[1]: Started libpod-conmon-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope.
Nov 22 06:08:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:48 compute-0 podman[288543]: 2025-11-22 06:08:48.92874941 +0000 UTC m=+0.398881566 container init f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:08:48 compute-0 podman[288543]: 2025-11-22 06:08:48.934412133 +0000 UTC m=+0.404544209 container start f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 06:08:48 compute-0 trusting_hamilton[288559]: 167 167
Nov 22 06:08:48 compute-0 systemd[1]: libpod-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope: Deactivated successfully.
Nov 22 06:08:49 compute-0 podman[288543]: 2025-11-22 06:08:49.125837905 +0000 UTC m=+0.595970531 container attach f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:49 compute-0 podman[288543]: 2025-11-22 06:08:49.126404 +0000 UTC m=+0.596536106 container died f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:08:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:49 compute-0 ceph-mon[75840]: pgmap v1482: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bb915d66deb03c525bcca29de0b86233cf4345f414f273a1f98c5c8ed9f2431-merged.mount: Deactivated successfully.
Nov 22 06:08:50 compute-0 podman[288543]: 2025-11-22 06:08:50.190073578 +0000 UTC m=+1.660205684 container remove f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hamilton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 06:08:50 compute-0 systemd[1]: libpod-conmon-f59be56023e1a105e8f28a063a36b61697b94d0d3208d54547c06052f4b9b49c.scope: Deactivated successfully.
Nov 22 06:08:50 compute-0 podman[288583]: 2025-11-22 06:08:50.390355388 +0000 UTC m=+0.043201784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:50 compute-0 podman[288583]: 2025-11-22 06:08:50.745585609 +0000 UTC m=+0.398431955 container create abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 06:08:50 compute-0 systemd[1]: Started libpod-conmon-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope.
Nov 22 06:08:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:51 compute-0 podman[288583]: 2025-11-22 06:08:51.010883349 +0000 UTC m=+0.663729675 container init abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:08:51 compute-0 podman[288583]: 2025-11-22 06:08:51.020411295 +0000 UTC m=+0.673257611 container start abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:08:51 compute-0 podman[288583]: 2025-11-22 06:08:51.033877948 +0000 UTC m=+0.686724304 container attach abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:08:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:51 compute-0 ceph-mon[75840]: pgmap v1483: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]: {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     "0": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "devices": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "/dev/loop3"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             ],
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_name": "ceph_lv0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_size": "21470642176",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "name": "ceph_lv0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "tags": {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_name": "ceph",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.crush_device_class": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.encrypted": "0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_id": "0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.vdo": "0"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             },
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "vg_name": "ceph_vg0"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         }
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     ],
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     "1": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "devices": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "/dev/loop4"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             ],
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_name": "ceph_lv1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_size": "21470642176",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "name": "ceph_lv1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "tags": {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_name": "ceph",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.crush_device_class": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.encrypted": "0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_id": "1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.vdo": "0"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             },
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "vg_name": "ceph_vg1"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         }
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     ],
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     "2": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "devices": [
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "/dev/loop5"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             ],
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_name": "ceph_lv2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_size": "21470642176",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "name": "ceph_lv2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "tags": {
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.cluster_name": "ceph",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.crush_device_class": "",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.encrypted": "0",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osd_id": "2",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:                 "ceph.vdo": "0"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             },
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "type": "block",
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:             "vg_name": "ceph_vg2"
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:         }
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]:     ]
Nov 22 06:08:51 compute-0 vibrant_bohr[288600]: }
Nov 22 06:08:51 compute-0 systemd[1]: libpod-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope: Deactivated successfully.
Nov 22 06:08:51 compute-0 podman[288583]: 2025-11-22 06:08:51.775013095 +0000 UTC m=+1.427859451 container died abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 06:08:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cc429da792f199239f5bf5571b4bde8205b971e326be170618ca1852db01a34-merged.mount: Deactivated successfully.
Nov 22 06:08:51 compute-0 podman[288583]: 2025-11-22 06:08:51.95876411 +0000 UTC m=+1.611610436 container remove abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 06:08:51 compute-0 systemd[1]: libpod-conmon-abb846401cfaf2751f9a81b2a0cb72ef0426d40d5f47b642af971f6019180a1f.scope: Deactivated successfully.
Nov 22 06:08:52 compute-0 sudo[288478]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:52 compute-0 sudo[288621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:52 compute-0 sudo[288621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:52 compute-0 sudo[288621]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:52 compute-0 sudo[288646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:08:52 compute-0 sudo[288646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:52 compute-0 sudo[288646]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:52 compute-0 sudo[288671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:52 compute-0 sudo[288671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:52 compute-0 sudo[288671]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:52 compute-0 sudo[288696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:08:52 compute-0 sudo[288696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:08:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.204801836 +0000 UTC m=+0.042596297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.336930321 +0000 UTC m=+0.174724742 container create d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 06:08:53 compute-0 systemd[1]: Started libpod-conmon-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope.
Nov 22 06:08:53 compute-0 ceph-mon[75840]: pgmap v1484: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.42864905 +0000 UTC m=+0.266443511 container init d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.436022078 +0000 UTC m=+0.273816489 container start d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.440120719 +0000 UTC m=+0.277915140 container attach d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 06:08:53 compute-0 magical_hugle[288777]: 167 167
Nov 22 06:08:53 compute-0 systemd[1]: libpod-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope: Deactivated successfully.
Nov 22 06:08:53 compute-0 conmon[288777]: conmon d194c11c2c155aee8847 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope/container/memory.events
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.446382388 +0000 UTC m=+0.284176799 container died d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 06:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9460ec28d158d38c028abd02abcda9cfcb1df88568f6fc0f88ac313f5dcd4792-merged.mount: Deactivated successfully.
Nov 22 06:08:53 compute-0 podman[288761]: 2025-11-22 06:08:53.5014592 +0000 UTC m=+0.339253611 container remove d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hugle, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 06:08:53 compute-0 systemd[1]: libpod-conmon-d194c11c2c155aee8847fc3fdf7c14b36d48cdc7705d3386ff89e8fc4c58099e.scope: Deactivated successfully.
Nov 22 06:08:53 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:53 compute-0 podman[288800]: 2025-11-22 06:08:53.796272564 +0000 UTC m=+0.105820349 container create b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:08:53 compute-0 podman[288800]: 2025-11-22 06:08:53.727732219 +0000 UTC m=+0.037280064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:08:53 compute-0 systemd[1]: Started libpod-conmon-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope.
Nov 22 06:08:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:08:53 compute-0 podman[288800]: 2025-11-22 06:08:53.98935688 +0000 UTC m=+0.298904725 container init b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 06:08:54 compute-0 podman[288800]: 2025-11-22 06:08:54.002284939 +0000 UTC m=+0.311832694 container start b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 22 06:08:54 compute-0 podman[288800]: 2025-11-22 06:08:54.006759559 +0000 UTC m=+0.316307344 container attach b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:08:55 compute-0 kind_wilson[288816]: {
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_id": 1,
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "type": "bluestore"
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     },
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_id": 2,
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "type": "bluestore"
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     },
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_id": 0,
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:08:55 compute-0 kind_wilson[288816]:         "type": "bluestore"
Nov 22 06:08:55 compute-0 kind_wilson[288816]:     }
Nov 22 06:08:55 compute-0 kind_wilson[288816]: }
Nov 22 06:08:55 compute-0 systemd[1]: libpod-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Deactivated successfully.
Nov 22 06:08:55 compute-0 podman[288800]: 2025-11-22 06:08:55.089363486 +0000 UTC m=+1.398911281 container died b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:08:55 compute-0 systemd[1]: libpod-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Consumed 1.091s CPU time.
Nov 22 06:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba776ffe170deedabe4ceb155424871ab15b928fd29ece12bcd3f85c1ad2d38-merged.mount: Deactivated successfully.
Nov 22 06:08:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:55 compute-0 ceph-mon[75840]: pgmap v1485: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:56 compute-0 podman[288800]: 2025-11-22 06:08:56.124547707 +0000 UTC m=+2.434095502 container remove b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 06:08:56 compute-0 systemd[1]: libpod-conmon-b3addfa50b2c934ba62853695afb6144ab850cd6aea2a3cab7057b1f036f72ab.scope: Deactivated successfully.
Nov 22 06:08:56 compute-0 sudo[288696]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:08:56 compute-0 podman[288863]: 2025-11-22 06:08:56.242268545 +0000 UTC m=+0.088165834 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 22 06:08:56 compute-0 podman[288864]: 2025-11-22 06:08:56.271204304 +0000 UTC m=+0.116768453 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 06:08:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:56 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:08:56 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 85385dd6-182b-4f4f-9d17-5084148fcf52 does not exist
Nov 22 06:08:56 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 5dbc774d-9a51-4542-87b5-c3d5af2a8f3e does not exist
Nov 22 06:08:56 compute-0 sudo[288903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:08:56 compute-0 sudo[288903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:56 compute-0 sudo[288903]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:56 compute-0 sudo[288928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:08:56 compute-0 sudo[288928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:08:56 compute-0 sudo[288928]: pam_unix(sudo:session): session closed for user root
Nov 22 06:08:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:08:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:08:59 compute-0 ceph-mon[75840]: pgmap v1486: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:08:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:00 compute-0 ceph-mon[75840]: pgmap v1487: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:01 compute-0 ceph-mon[75840]: pgmap v1488: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:03 compute-0 ceph-mon[75840]: pgmap v1489: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:05 compute-0 ceph-mon[75840]: pgmap v1490: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:07 compute-0 ceph-mon[75840]: pgmap v1491: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:09 compute-0 ceph-mon[75840]: pgmap v1492: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:12 compute-0 ceph-mon[75840]: pgmap v1493: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:13 compute-0 ceph-mon[75840]: pgmap v1494: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:16 compute-0 podman[288953]: 2025-11-22 06:09:16.245465072 +0000 UTC m=+0.099182251 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:09:16 compute-0 ceph-mon[75840]: pgmap v1495: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:17 compute-0 ceph-mon[75840]: pgmap v1496: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:19 compute-0 ceph-mon[75840]: pgmap v1497: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:21 compute-0 ceph-mon[75840]: pgmap v1498: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:21 compute-0 sshd-session[288979]: Invalid user binance from 80.94.92.166 port 33942
Nov 22 06:09:21 compute-0 sshd-session[288979]: Connection closed by invalid user binance 80.94.92.166 port 33942 [preauth]
Nov 22 06:09:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:23 compute-0 ceph-mon[75840]: pgmap v1499: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.128 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.155 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.156 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:09:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:25 compute-0 ceph-mon[75840]: pgmap v1500: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:09:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469962475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.661 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.837 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.839 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.903 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.904 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:09:25 compute-0 nova_compute[255660]: 2025-11-22 06:09:25.926 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:09:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1469962475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:09:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:09:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523048568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:09:26 compute-0 nova_compute[255660]: 2025-11-22 06:09:26.354 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:09:26 compute-0 nova_compute[255660]: 2025-11-22 06:09:26.360 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:09:26 compute-0 nova_compute[255660]: 2025-11-22 06:09:26.375 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:09:26 compute-0 nova_compute[255660]: 2025-11-22 06:09:26.378 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:09:26 compute-0 nova_compute[255660]: 2025-11-22 06:09:26.378 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:09:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:27 compute-0 podman[289026]: 2025-11-22 06:09:27.229156909 +0000 UTC m=+0.074304881 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 06:09:27 compute-0 podman[289025]: 2025-11-22 06:09:27.247692927 +0000 UTC m=+0.097511046 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:09:27 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/523048568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:09:27 compute-0 ceph-mon[75840]: pgmap v1501: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:29 compute-0 ceph-mon[75840]: pgmap v1502: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:31 compute-0 ceph-mon[75840]: pgmap v1503: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:31 compute-0 nova_compute[255660]: 2025-11-22 06:09:31.380 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:33 compute-0 nova_compute[255660]: 2025-11-22 06:09:33.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:33 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:33 compute-0 ceph-mon[75840]: pgmap v1504: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:34 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:34 compute-0 nova_compute[255660]: 2025-11-22 06:09:34.125 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:34 compute-0 nova_compute[255660]: 2025-11-22 06:09:34.148 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:34 compute-0 nova_compute[255660]: 2025-11-22 06:09:34.148 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 06:09:35 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:35 compute-0 ceph-mon[75840]: pgmap v1505: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:36 compute-0 nova_compute[255660]: 2025-11-22 06:09:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:36 compute-0 nova_compute[255660]: 2025-11-22 06:09:36.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.952 164618 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:09:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.953 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:09:36 compute-0 ovn_metadata_agent[164613]: 2025-11-22 06:09:36.953 164618 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:09:37 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:37 compute-0 ceph-mon[75840]: pgmap v1506: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:39 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:39 compute-0 nova_compute[255660]: 2025-11-22 06:09:39.130 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:39 compute-0 nova_compute[255660]: 2025-11-22 06:09:39.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:39 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:39 compute-0 ceph-mon[75840]: pgmap v1507: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:41 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:41 compute-0 ceph-mon[75840]: pgmap v1508: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:42 compute-0 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:09:42 compute-0 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 06:09:42 compute-0 nova_compute[255660]: 2025-11-22 06:09:42.131 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 06:09:42 compute-0 nova_compute[255660]: 2025-11-22 06:09:42.154 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:43 compute-0 ceph-mon[75840]: pgmap v1509: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2025-11-22_06:09:43
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [balancer INFO root] do_upmap
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups']
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 changes
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:09:43 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:09:44 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:09:44 compute-0 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 06:09:45 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:45 compute-0 ceph-mon[75840]: pgmap v1510: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:47 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 06:09:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:09:47 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 06:09:47 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:09:47 compute-0 ceph-mon[75840]: pgmap v1511: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 06:09:47 compute-0 ceph-mon[75840]: from='client.? 192.168.122.10:0/2383423483' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 06:09:47 compute-0 podman[289062]: 2025-11-22 06:09:47.320455267 +0000 UTC m=+0.169579931 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 22 06:09:49 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:49 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:49 compute-0 ceph-mon[75840]: pgmap v1512: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:51 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:51 compute-0 ceph-mon[75840]: pgmap v1513: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:09:53 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:09:53 compute-0 ceph-mon[75840]: pgmap v1514: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:54 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:55 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:55 compute-0 ceph-mon[75840]: pgmap v1515: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:56 compute-0 sudo[289090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:09:56 compute-0 sudo[289090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:56 compute-0 sudo[289090]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:56 compute-0 sshd-session[289088]: Accepted publickey for zuul from 192.168.122.10 port 43036 ssh2: ECDSA SHA256:C6JlrRXFSyxIQXKBMUSI9j4HdKFSQKMseEdc5EcuVHM
Nov 22 06:09:56 compute-0 systemd-logind[798]: New session 54 of user zuul.
Nov 22 06:09:56 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 22 06:09:56 compute-0 sshd-session[289088]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 06:09:56 compute-0 sudo[289115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:09:56 compute-0 sudo[289115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:56 compute-0 sudo[289115]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:56 compute-0 sudo[289142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:09:56 compute-0 sudo[289142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:56 compute-0 sudo[289142]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:56 compute-0 sudo[289163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 22 06:09:56 compute-0 sudo[289163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 06:09:56 compute-0 sudo[289191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 06:09:56 compute-0 sudo[289191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:57 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:57 compute-0 sudo[289191]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:09:57 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:09:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 06:09:57 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:09:57 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 06:09:57 compute-0 ceph-mon[75840]: pgmap v1516: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:58 compute-0 podman[289256]: 2025-11-22 06:09:58.018127362 +0000 UTC m=+0.085023471 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 06:09:58 compute-0 podman[289257]: 2025-11-22 06:09:58.047343773 +0000 UTC m=+0.120156910 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 06:09:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:09:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev cddb69e4-b68c-439f-acbc-08a4f3edbe43 does not exist
Nov 22 06:09:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev f38dc9e0-6f6d-4bbf-9726-8f4a6c64751f does not exist
Nov 22 06:09:58 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev a9c86435-e90a-4966-aa81-3c371a0bc942 does not exist
Nov 22 06:09:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 06:09:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:09:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 06:09:58 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:09:58 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:09:58 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:09:58 compute-0 sudo[289342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:09:58 compute-0 sudo[289342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:58 compute-0 sudo[289342]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:58 compute-0 sudo[289373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:09:58 compute-0 sudo[289373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:58 compute-0 sudo[289373]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:58 compute-0 sudo[289410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:09:58 compute-0 sudo[289410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:58 compute-0 sudo[289410]: pam_unix(sudo:session): session closed for user root
Nov 22 06:09:58 compute-0 sudo[289438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 06:09:58 compute-0 sudo[289438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 06:09:59 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:09:59 compute-0 podman[289538]: 2025-11-22 06:09:59.078990901 +0000 UTC m=+0.023461438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:09:59 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:09:59 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:09:59 compute-0 podman[289538]: 2025-11-22 06:09:59.525713864 +0000 UTC m=+0.470184381 container create b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 06:09:59 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 22 06:09:59 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:09:59.744174) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 06:09:59 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 22 06:09:59 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791799744253, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 883, "num_deletes": 255, "total_data_size": 1184748, "memory_usage": 1202480, "flush_reason": "Manual Compaction"}
Nov 22 06:09:59 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 22 06:09:59 compute-0 systemd[1]: Started libpod-conmon-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope.
Nov 22 06:09:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800026072, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1162748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33324, "largest_seqno": 34206, "table_properties": {"data_size": 1158315, "index_size": 2085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9621, "raw_average_key_size": 19, "raw_value_size": 1149390, "raw_average_value_size": 2289, "num_data_blocks": 93, "num_entries": 502, "num_filter_entries": 502, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763791721, "oldest_key_time": 1763791721, "file_creation_time": 1763791799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 281978 microseconds, and 7059 cpu microseconds.
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:10:00 compute-0 podman[289538]: 2025-11-22 06:10:00.030330224 +0000 UTC m=+0.974800821 container init b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 06:10:00 compute-0 podman[289538]: 2025-11-22 06:10:00.039906309 +0000 UTC m=+0.984376826 container start b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 06:10:00 compute-0 gallant_sinoussi[289567]: 167 167
Nov 22 06:10:00 compute-0 systemd[1]: libpod-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope: Deactivated successfully.
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.026154) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1162748 bytes OK
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.026189) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.365945) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.365981) EVENT_LOG_v1 {"time_micros": 1763791800365970, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366005) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1180400, prev total WAL file size 1181557, number of live WAL files 2.
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366907) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303039' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1135KB)], [71(8276KB)]
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800366945, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9637458, "oldest_snapshot_seqno": -1}
Nov 22 06:10:00 compute-0 podman[289538]: 2025-11-22 06:10:00.425397317 +0000 UTC m=+1.369867914 container attach b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:10:00 compute-0 podman[289547]: 2025-11-22 06:10:00.42627636 +0000 UTC m=+1.352446298 container died b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 06:10:00 compute-0 ceph-mon[75840]: pgmap v1517: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6233 keys, 9375817 bytes, temperature: kUnknown
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800486867, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9375817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9334259, "index_size": 24872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 158273, "raw_average_key_size": 25, "raw_value_size": 9222541, "raw_average_value_size": 1479, "num_data_blocks": 1009, "num_entries": 6233, "num_filter_entries": 6233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763789034, "oldest_key_time": 0, "file_creation_time": 1763791800, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c4e45ab2-4273-47c3-96b1-648e5316c944", "db_session_id": "OCOOLGAJEIQ903CUBBA6", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.487238) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9375817 bytes
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.494278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.3 rd, 78.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(16.4) write-amplify(8.1) OK, records in: 6755, records dropped: 522 output_compression: NoCompression
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.494300) EVENT_LOG_v1 {"time_micros": 1763791800494289, "job": 40, "event": "compaction_finished", "compaction_time_micros": 119997, "compaction_time_cpu_micros": 43836, "output_level": 6, "num_output_files": 1, "total_output_size": 9375817, "num_input_records": 6755, "num_output_records": 6233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800494625, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763791800496121, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.366850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 ceph-mon[75840]: rocksdb: (Original Log Time 2025/11/22-06:10:00.496208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 06:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-47711594683d10edb41c568d354586c6124526cf77bfaf8062f1f82800152f23-merged.mount: Deactivated successfully.
Nov 22 06:10:00 compute-0 podman[289538]: 2025-11-22 06:10:00.615030702 +0000 UTC m=+1.559501219 container remove b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 06:10:00 compute-0 systemd[1]: libpod-conmon-b89f4849e4c28e63f1515831a6cc724af68013b03ba6c6a385955ed0a232f999.scope: Deactivated successfully.
Nov 22 06:10:00 compute-0 podman[289619]: 2025-11-22 06:10:00.78524868 +0000 UTC m=+0.042361952 container create 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 06:10:00 compute-0 systemd[1]: Started libpod-conmon-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope.
Nov 22 06:10:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:00 compute-0 podman[289619]: 2025-11-22 06:10:00.767141006 +0000 UTC m=+0.024254298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:10:00 compute-0 podman[289619]: 2025-11-22 06:10:00.906118408 +0000 UTC m=+0.163231730 container init 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 06:10:00 compute-0 podman[289619]: 2025-11-22 06:10:00.917652776 +0000 UTC m=+0.174766058 container start 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 06:10:00 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14815 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:00 compute-0 podman[289619]: 2025-11-22 06:10:00.922557268 +0000 UTC m=+0.179670610 container attach 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 06:10:01 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:01 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14817 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:01 compute-0 ceph-mon[75840]: from='client.14815 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:01 compute-0 ceph-mon[75840]: pgmap v1518: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:01 compute-0 exciting_franklin[289636]: --> passed data devices: 0 physical, 3 LVM
Nov 22 06:10:01 compute-0 exciting_franklin[289636]: --> relative data size: 1.0
Nov 22 06:10:01 compute-0 exciting_franklin[289636]: --> All data devices are unavailable
Nov 22 06:10:01 compute-0 systemd[1]: libpod-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope: Deactivated successfully.
Nov 22 06:10:01 compute-0 podman[289619]: 2025-11-22 06:10:01.98280631 +0000 UTC m=+1.239919632 container died 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:10:02 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 06:10:02 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562757294' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8e3f52da79f2f034bda4b90fc00118b9aa77e9fd7ab421e473528da6d9085b-merged.mount: Deactivated successfully.
Nov 22 06:10:02 compute-0 podman[289619]: 2025-11-22 06:10:02.123189329 +0000 UTC m=+1.380302601 container remove 0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_franklin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:10:02 compute-0 systemd[1]: libpod-conmon-0610f50453bc506c714639630f840bd6b7143c4e67b36af70c9e99a36b301573.scope: Deactivated successfully.
Nov 22 06:10:02 compute-0 sudo[289438]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:02 compute-0 sudo[289746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:10:02 compute-0 sudo[289746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:02 compute-0 sudo[289746]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:02 compute-0 sudo[289779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:10:02 compute-0 sudo[289779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:02 compute-0 sudo[289779]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:02 compute-0 sudo[289804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:10:02 compute-0 sudo[289804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:02 compute-0 sudo[289804]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:02 compute-0 sudo[289829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- lvm list --format json
Nov 22 06:10:02 compute-0 sudo[289829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:02 compute-0 ceph-mon[75840]: from='client.14817 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:02 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2562757294' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.776126322 +0000 UTC m=+0.047263374 container create ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:10:02 compute-0 systemd[1]: Started libpod-conmon-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope.
Nov 22 06:10:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.751571185 +0000 UTC m=+0.022708197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.855314277 +0000 UTC m=+0.126451319 container init ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.864561274 +0000 UTC m=+0.135698306 container start ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:10:02 compute-0 elegant_leakey[289919]: 167 167
Nov 22 06:10:02 compute-0 systemd[1]: libpod-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope: Deactivated successfully.
Nov 22 06:10:02 compute-0 conmon[289919]: conmon ab7a0482abab772f2f32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope/container/memory.events
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.871699725 +0000 UTC m=+0.142836787 container attach ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.872135326 +0000 UTC m=+0.143272348 container died ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 06:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e223320b65549e2ec0b10ff747c3dc9a9d6ba3002e12074442b93f135b91fdf3-merged.mount: Deactivated successfully.
Nov 22 06:10:02 compute-0 podman[289902]: 2025-11-22 06:10:02.916738257 +0000 UTC m=+0.187875279 container remove ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 06:10:02 compute-0 systemd[1]: libpod-conmon-ab7a0482abab772f2f32b0f71541c5d345c72af1053956e41bd29c115cb6ff2b.scope: Deactivated successfully.
Nov 22 06:10:03 compute-0 podman[289943]: 2025-11-22 06:10:03.128743981 +0000 UTC m=+0.072690703 container create 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:10:03 compute-0 systemd[1]: Started libpod-conmon-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope.
Nov 22 06:10:03 compute-0 podman[289943]: 2025-11-22 06:10:03.090828058 +0000 UTC m=+0.034774790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:10:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:03 compute-0 podman[289943]: 2025-11-22 06:10:03.220629895 +0000 UTC m=+0.164576597 container init 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 06:10:03 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:03 compute-0 podman[289943]: 2025-11-22 06:10:03.233353645 +0000 UTC m=+0.177300337 container start 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:10:03 compute-0 podman[289943]: 2025-11-22 06:10:03.236385256 +0000 UTC m=+0.180331978 container attach 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:10:03 compute-0 ceph-mon[75840]: pgmap v1519: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:04 compute-0 optimistic_gould[289960]: {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     "0": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "devices": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "/dev/loop3"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             ],
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_name": "ceph_lv0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_size": "21470642176",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a5feb48b-30da-4436-abf9-8885d26e1de8,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "name": "ceph_lv0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "tags": {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_uuid": "V95jvJ-YKfN-5AFp-cBrc-Aenp-dm9b-5A1fgw",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_name": "ceph",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.crush_device_class": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.encrypted": "0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_fsid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_id": "0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.vdo": "0"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             },
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "vg_name": "ceph_vg0"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         }
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     ],
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     "1": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "devices": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "/dev/loop4"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             ],
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_name": "ceph_lv1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_size": "21470642176",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=1fb2d706-3ef2-43d5-9448-a482f97db695,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "name": "ceph_lv1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "tags": {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_uuid": "ke4vqf-o1C8-nSut-ATT5-Ky4f-pmxL-XWvAQW",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_name": "ceph",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.crush_device_class": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.encrypted": "0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_fsid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_id": "1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.vdo": "0"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             },
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "vg_name": "ceph_vg1"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         }
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     ],
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     "2": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "devices": [
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "/dev/loop5"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             ],
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_name": "ceph_lv2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_size": "21470642176",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=13fdadc6-d566-5465-9ac8-a148ef130da1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=315eef4c-16c8-4117-80ec-ccdc45d85649,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "lv_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "name": "ceph_lv2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "tags": {
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.block_uuid": "vtYLGx-FS3N-1qFS-3lR7-dDt9-gL2U-XZw0pU",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.cluster_name": "ceph",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.crush_device_class": "",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.encrypted": "0",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_fsid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osd_id": "2",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:                 "ceph.vdo": "0"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             },
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "type": "block",
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:             "vg_name": "ceph_vg2"
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:         }
Nov 22 06:10:04 compute-0 optimistic_gould[289960]:     ]
Nov 22 06:10:04 compute-0 optimistic_gould[289960]: }
Nov 22 06:10:04 compute-0 systemd[1]: libpod-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope: Deactivated successfully.
Nov 22 06:10:04 compute-0 podman[289943]: 2025-11-22 06:10:04.069530812 +0000 UTC m=+1.013477534 container died 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 06:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f116d507dc88c4b4345cdd656e292c389abaa9493bc2aa1f05080e96fe3dcc-merged.mount: Deactivated successfully.
Nov 22 06:10:04 compute-0 podman[289943]: 2025-11-22 06:10:04.351082193 +0000 UTC m=+1.295028895 container remove 1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_gould, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 22 06:10:04 compute-0 systemd[1]: libpod-conmon-1bedb9ae87539efa60ad5e9c0ef7945a9029bd2a40971917cba1545ef4e8d9ca.scope: Deactivated successfully.
Nov 22 06:10:04 compute-0 sudo[289829]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:04 compute-0 sudo[289985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:10:04 compute-0 sudo[289985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:04 compute-0 sudo[289985]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:04 compute-0 sudo[290010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 06:10:04 compute-0 sudo[290010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:04 compute-0 sudo[290010]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:04 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:04 compute-0 sudo[290035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:10:04 compute-0 sudo[290035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:04 compute-0 sudo[290035]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:04 compute-0 sudo[290060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/13fdadc6-d566-5465-9ac8-a148ef130da1/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 13fdadc6-d566-5465-9ac8-a148ef130da1 -- raw list --format json
Nov 22 06:10:04 compute-0 sudo[290060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:04 compute-0 ovs-vsctl[290138]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 06:10:04 compute-0 podman[290155]: 2025-11-22 06:10:04.942420689 +0000 UTC m=+0.038424188 container create a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 06:10:04 compute-0 systemd[1]: Started libpod-conmon-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope.
Nov 22 06:10:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:04.926267617 +0000 UTC m=+0.022271136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:05.030195754 +0000 UTC m=+0.126199303 container init a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:05.038514267 +0000 UTC m=+0.134517756 container start a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:05.042446261 +0000 UTC m=+0.138449750 container attach a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:10:05 compute-0 blissful_shockley[290184]: 167 167
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:05.044763713 +0000 UTC m=+0.140767192 container died a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 06:10:05 compute-0 systemd[1]: libpod-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope: Deactivated successfully.
Nov 22 06:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c8810eeee41da6ab90c5dd9506ff7f8cd2c105670c56483070c20cbe75f6b29-merged.mount: Deactivated successfully.
Nov 22 06:10:05 compute-0 podman[290155]: 2025-11-22 06:10:05.094486801 +0000 UTC m=+0.190490290 container remove a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shockley, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 06:10:05 compute-0 systemd[1]: libpod-conmon-a1342c7fad4959b927dcd5d05b3a43dd6d67fc726eb0ee3ec8abc685e1f254a6.scope: Deactivated successfully.
Nov 22 06:10:05 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:05 compute-0 podman[290226]: 2025-11-22 06:10:05.312785822 +0000 UTC m=+0.068877770 container create 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 06:10:05 compute-0 systemd[1]: Started libpod-conmon-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope.
Nov 22 06:10:05 compute-0 podman[290226]: 2025-11-22 06:10:05.281103877 +0000 UTC m=+0.037195845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 06:10:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 06:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 06:10:05 compute-0 podman[290226]: 2025-11-22 06:10:05.407906913 +0000 UTC m=+0.163998871 container init 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 06:10:05 compute-0 podman[290226]: 2025-11-22 06:10:05.418119887 +0000 UTC m=+0.174211845 container start 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 06:10:05 compute-0 podman[290226]: 2025-11-22 06:10:05.422410531 +0000 UTC m=+0.178502479 container attach 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 06:10:05 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 06:10:05 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 06:10:05 compute-0 virtqemud[255182]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 06:10:06 compute-0 ceph-mon[75840]: pgmap v1520: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:06 compute-0 confident_sanderson[290247]: {
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     "1fb2d706-3ef2-43d5-9448-a482f97db695": {
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_id": 1,
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_uuid": "1fb2d706-3ef2-43d5-9448-a482f97db695",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "type": "bluestore"
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     },
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     "315eef4c-16c8-4117-80ec-ccdc45d85649": {
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_id": 2,
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_uuid": "315eef4c-16c8-4117-80ec-ccdc45d85649",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "type": "bluestore"
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     },
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     "a5feb48b-30da-4436-abf9-8885d26e1de8": {
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "ceph_fsid": "13fdadc6-d566-5465-9ac8-a148ef130da1",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_id": 0,
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "osd_uuid": "a5feb48b-30da-4436-abf9-8885d26e1de8",
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:         "type": "bluestore"
Nov 22 06:10:06 compute-0 confident_sanderson[290247]:     }
Nov 22 06:10:06 compute-0 confident_sanderson[290247]: }
Nov 22 06:10:06 compute-0 systemd[1]: libpod-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope: Deactivated successfully.
Nov 22 06:10:06 compute-0 conmon[290247]: conmon 344e5bf1639ae4c4374c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope/container/memory.events
Nov 22 06:10:06 compute-0 podman[290226]: 2025-11-22 06:10:06.408263276 +0000 UTC m=+1.164355214 container died 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 06:10:06 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: cache status {prefix=cache status} (starting...)
Nov 22 06:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffce803f848e6c79181dfce9165f3393a91041bdaf9447e60e76c2eaf47b0f1e-merged.mount: Deactivated successfully.
Nov 22 06:10:06 compute-0 podman[290226]: 2025-11-22 06:10:06.475547614 +0000 UTC m=+1.231639572 container remove 344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 06:10:06 compute-0 systemd[1]: libpod-conmon-344e5bf1639ae4c4374c830c2d838297cb9278fdc7cd558361da8bf59974346f.scope: Deactivated successfully.
Nov 22 06:10:06 compute-0 sudo[290060]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 06:10:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:10:06 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 06:10:06 compute-0 ceph-mon[75840]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:10:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 81db3c30-8a03-47f7-bea4-53062f107fd9 does not exist
Nov 22 06:10:06 compute-0 ceph-mgr[76134]: [progress WARNING root] complete: ev 9c32bb57-be9f-4e78-a679-7f025cd80996 does not exist
Nov 22 06:10:06 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: client ls {prefix=client ls} (starting...)
Nov 22 06:10:06 compute-0 sudo[290558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 06:10:06 compute-0 sudo[290558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:06 compute-0 sudo[290558]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:06 compute-0 lvm[290609]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 06:10:06 compute-0 lvm[290609]: VG ceph_vg0 finished
Nov 22 06:10:06 compute-0 lvm[290628]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 06:10:06 compute-0 lvm[290628]: VG ceph_vg1 finished
Nov 22 06:10:06 compute-0 sudo[290599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 06:10:06 compute-0 sudo[290599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 06:10:06 compute-0 sudo[290599]: pam_unix(sudo:session): session closed for user root
Nov 22 06:10:06 compute-0 lvm[290642]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 06:10:06 compute-0 lvm[290642]: VG ceph_vg2 finished
Nov 22 06:10:06 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 06:10:07 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 06:10:07 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14823 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 06:10:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:10:07 compute-0 ceph-mon[75840]: from='mgr.14132 192.168.122.100:0/4109431471' entity='mgr.compute-0.mscchl' 
Nov 22 06:10:07 compute-0 ceph-mon[75840]: from='client.14821 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:07 compute-0 ceph-mon[75840]: pgmap v1521: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 06:10:07 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 06:10:07 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666805923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 06:10:07 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 06:10:08 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14829 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:10:08 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:08.128+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:10:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 06:10:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 06:10:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069858798' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 06:10:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 06:10:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055500484' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: ops {prefix=ops} (starting...)
Nov 22 06:10:08 compute-0 ceph-mon[75840]: from='client.14823 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3666805923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: from='client.14829 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2069858798' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3055500484' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 06:10:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019329833' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 06:10:08 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 06:10:08 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582277635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 06:10:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417081325' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:09 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: session ls {prefix=session ls} (starting...)
Nov 22 06:10:09 compute-0 ceph-mds[102299]: mds.cephfs.compute-0.dntioh asok_command: status {prefix=status} (starting...)
Nov 22 06:10:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 06:10:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87260242' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14843 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4019329833' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3582277635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/417081325' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: pgmap v1522: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:09 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/87260242' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 06:10:09 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118891367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:10:09 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14847 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 06:10:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/545630278' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 06:10:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398278668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: from='client.14843 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3118891367' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: from='client.14847 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/545630278' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2398278668' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 06:10:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3631893294' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 06:10:10 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 06:10:10 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/384306363' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 06:10:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156803746' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 06:10:11 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:11.091+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 06:10:11 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14863 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 06:10:11 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976020924' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3631893294' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/384306363' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4156803746' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: from='client.14859 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mon[75840]: pgmap v1523: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:11 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2976020924' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 06:10:11 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 06:10:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2667366673' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 06:10:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931283969' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: from='client.14863 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: from='client.14865 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2667366673' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3931283969' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 06:10:12 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 06:10:12 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660957875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:26.692726+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:27.692908+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:28.693038+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:29.693153+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:30.693307+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:31.693461+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:32.693680+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:33.693827+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:34.693954+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.694178+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.694347+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.694519+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.694661+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.694797+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.694951+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.695173+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.695340+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.695544+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.695701+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 345.672515869s of 345.679687500s, submitted: 2
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.695855+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.695970+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 1769472 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.696129+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.696301+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.696520+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.697558+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 1671168 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.697639+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1662976 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.698313+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.698595+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.698867+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.699141+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1654784 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.699328+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.699516+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 1646592 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.699778+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.699920+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 1638400 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.700079+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.700225+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.700373+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 1630208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.700545+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.700794+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.700945+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68575232 unmapped: 1622016 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.701205+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68583424 unmapped: 1613824 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.701511+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.701622+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.701763+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 1605632 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.701966+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.702160+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 1597440 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.702353+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68608000 unmapped: 1589248 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.702460+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.702614+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68616192 unmapped: 1581056 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.702757+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.702888+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68624384 unmapped: 1572864 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.703098+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.703241+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.703434+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1564672 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.703543+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.703686+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1556480 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.703838+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.704003+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.704169+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.704352+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.704524+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.704711+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.704893+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.705035+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.705153+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.705359+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.705581+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.705765+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.705962+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.706099+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.706264+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.706415+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.706572+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.706703+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.706836+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.706988+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.707131+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.707297+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.707462+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.707640+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.707955+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.708125+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.708281+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.708434+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.708554+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.708706+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.708844+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.709019+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.709152+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.709328+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.709520+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.709727+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.709852+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.709971+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.710118+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.710274+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.710462+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.710639+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.710761+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.710894+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.711021+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.711194+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.711329+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.711504+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.711648+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68657152 unmapped: 1540096 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.711797+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.711978+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.712118+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.712252+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.712368+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.712534+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.712704+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.712827+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.712958+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.713106+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.713267+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.713403+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.713537+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.713685+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.713837+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.713990+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.714173+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.714322+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.714420+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.714512+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.714658+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.714778+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.714954+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.715078+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.715227+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.715374+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.715533+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.715696+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.715853+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.715979+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.716077+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.716196+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.716319+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.716502+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.716614+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.716760+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.716951+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.717679+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.717839+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.717999+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.718137+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.718290+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.718530+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.718726+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.718939+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.719082+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.719277+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.719530+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.719782+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.720205+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.720342+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.720547+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.720720+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.720894+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.721023+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.721138+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.721660+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.721799+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.722061+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.722213+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.722345+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.722627+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.722781+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.723071+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.723255+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.723405+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.723596+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.723813+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.724016+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.724151+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.724324+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.724548+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.724729+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.724923+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.725066+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.725194+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.725380+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.725530+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.725717+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.725863+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.726003+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.726141+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.726267+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.726409+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.726558+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.726698+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.726857+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.727003+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.727178+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.727336+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.727606+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.727723+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.727843+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.728029+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.728166+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.728312+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.728504+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.728728+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.728899+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.729154+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.729313+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.729529+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.729701+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.729957+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.730093+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.730230+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.730417+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.730552+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.730734+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.730913+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.731103+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.731247+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.731409+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.731557+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.731732+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.731890+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.732051+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.732211+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.732338+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.732517+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.732667+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.732809+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1466368 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.732928+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.733103+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.733253+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.733559+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.733842+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.734003+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.734125+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.734222+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.734369+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.734577+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.734720+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.734841+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.735017+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.735201+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.735382+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.735548+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.735712+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.735816+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27ae5fc00
Nov 22 06:10:12 compute-0 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:10:12 compute-0 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: get_auth_request con 0x55c27c775400 auth_method 0
Nov 22 06:10:12 compute-0 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.736006+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.736181+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.736349+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.736655+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.736784+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.736912+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.737116+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.737270+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.737431+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.737550+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.737668+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.737831+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.737982+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.738163+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.738350+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.738526+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.738705+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.738886+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.739048+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.739251+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.739412+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.739626+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.739759+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.739940+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.740101+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.740396+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.740614+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.740768+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.740883+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.741064+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.741234+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.741365+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.741513+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.741699+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.741912+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.742121+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.742279+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.742428+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.742562+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.742707+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.742869+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.743057+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.743261+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.743459+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.743625+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.743757+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.743969+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.744095+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.744245+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.744532+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.744667+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.744794+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.744923+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.745067+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.745190+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.745320+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.745524+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.745658+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.746015+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.746193+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.746313+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.746454+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.746680+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.746917+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.747092+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.747269+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 ms_handle_reset con 0x55c27d491c00 session 0x55c27c84cd20
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd64800
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.747551+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.747717+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.747818+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.747975+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.748382+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.748527+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.748691+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.748872+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.749025+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.749180+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.749353+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.749534+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.749680+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.749834+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.749982+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.750124+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.750239+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.750414+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.750577+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.750731+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.750939+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.751081+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.751258+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.751427+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.751576+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.751736+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.751861+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.752032+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.752200+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.752433+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.752915+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.753103+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.753253+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.753412+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.753539+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.753667+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.753831+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.754020+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.754211+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.754417+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.754607+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.754740+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.754889+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.755012+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.755190+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.755369+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.755576+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.755786+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.755949+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.756124+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.757770+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.757982+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.758141+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.758355+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.758615+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.758882+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.759275+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.759531+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.759759+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.759996+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:12 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:12 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.760292+0000)
Nov 22 06:10:12 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:12 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:12 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.760521+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.760771+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.760999+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.761262+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.761556+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.761799+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.762069+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.762317+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.762551+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.762770+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.762969+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.763168+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.763341+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.763882+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.764015+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.764176+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.764327+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.764579+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.764872+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.765121+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.765256+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.765392+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.765517+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.765650+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.765853+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.766030+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.766207+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.766379+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.766551+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.766907+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.767101+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.767276+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.767599+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.767744+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.767913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.768052+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.768220+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.768364+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.768518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.768675+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.768817+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.768986+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.769142+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.769321+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.769587+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.769844+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.770083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.770336+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.770583+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.770833+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.771144+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.771332+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.773643+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.773838+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.774067+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.774298+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.774571+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.774821+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.775104+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.775559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.775789+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.776084+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.776381+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.776693+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.776956+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.777219+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.777537+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.777807+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.777986+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.778251+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.778518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.778694+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.778904+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.779067+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.779210+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.779524+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.779719+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.779881+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.780026+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.780373+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.780652+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.780806+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.781105+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.781256+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.781403+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.781654+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.781945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.782193+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.782404+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.782724+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.782848+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.782973+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.783129+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.783236+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.783347+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.783622+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.783796+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.783950+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.784183+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.784611+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.784773+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.784965+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.387999+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.388129+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.388286+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.388428+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.388544+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.388674+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.388824+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.388967+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.389139+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.389263+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.389371+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.389553+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.389690+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.389825+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.389975+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.390227+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.390459+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.390641+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.390771+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.390943+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.391058+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.391212+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.391390+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.391608+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.391740+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.392009+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.392156+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.392592+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.392676+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.393016+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.393191+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.393334+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.393577+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.393706+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.393830+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.393954+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.394089+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.394282+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.394444+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.394703+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.394840+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.394963+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.395109+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.395262+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.395388+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.395524+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.395703+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.395837+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.395950+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.396093+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.396253+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.396506+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.396605+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.396707+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.396821+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.397032+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.397197+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.397351+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.397488+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.397648+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.397807+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.397950+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5749 writes, 24K keys, 5749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5749 writes, 912 syncs, 6.30 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c27a006dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.398094+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.398975+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.399163+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.399615+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.399896+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.400096+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.400271+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.400563+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.400811+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.401082+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.401363+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.401626+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.401832+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.401992+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.402215+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.402338+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.402547+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.402746+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.402976+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.403220+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.403424+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.403588+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.403758+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.403913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.404153+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.404359+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.404570+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.404784+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.404933+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.405098+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.405249+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.405615+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.405911+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.406114+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.406362+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.406534+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.406694+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.406902+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.407075+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.407321+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.407508+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.407705+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.407901+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.408121+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.408350+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.408566+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.408871+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.409149+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.409368+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.409616+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.409895+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.410116+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.410285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.410433+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.410679+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.410894+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.411094+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.411246+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.411373+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.411509+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.411666+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.411831+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.412003+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.412178+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.412397+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.412567+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.412784+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.413250+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.413520+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.413651+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.413863+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.414180+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.414406+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.414671+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.414855+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.415090+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.415303+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.415554+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.415779+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.415928+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.416135+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.100036621s of 600.098510742s, submitted: 90
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.416286+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 606208 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.416420+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.416582+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.416750+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.416912+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.417175+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.417339+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.417537+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.417752+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.417962+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.418189+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.418440+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.418590+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.418798+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.418996+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.419175+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.419338+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.419537+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.419731+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.419920+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.420072+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.420222+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.420374+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.420566+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.420778+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.420965+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.421099+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.421258+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 507904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.421424+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.421597+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.421701+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.421863+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.422015+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.422186+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.422332+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.422532+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.423075+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.423235+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.423386+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.423537+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.423713+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.423865+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.423977+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.424169+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.424330+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.424452+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 475136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.424535+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.424691+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.424833+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.424990+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.425350+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.425569+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.425883+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.426653+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.426977+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.427227+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.427565+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.427839+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.428069+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.428583+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.429160+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.429421+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 466944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.429681+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.430033+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.430365+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.430627+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.430857+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.431000+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.431166+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.431369+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.431568+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.431782+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.432039+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.432276+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.432512+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.432694+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.432894+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.433053+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.433221+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.433369+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.433871+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.433964+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.434114+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.434312+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.434442+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.434603+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.434716+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.434894+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.435078+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 458752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.435305+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.435563+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.435741+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.435899+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.436055+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.436146+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.436320+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.436443+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.436603+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.436747+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.436901+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.437054+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.437211+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.437398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.437593+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.437817+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.438020+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.438227+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.438424+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.438646+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.438827+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.439017+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.439197+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.439349+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.439541+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.439721+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.440166+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.440527+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.440757+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.440926+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.441253+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.441544+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.441980+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.442229+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.442447+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.443001+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.443232+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.443596+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.443884+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.444152+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.445149+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.445370+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.445587+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.445791+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.446140+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.446333+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.446566+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.446770+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.446974+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.447236+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.447633+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.447841+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.448057+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.448248+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.448536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.448729+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.448911+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.449102+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.449297+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.449460+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.449715+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.449876+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.450080+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.450325+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.450536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.450787+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.451002+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.451236+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.451459+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.451849+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.452086+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.452271+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.452460+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.452722+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.452983+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.453163+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.453365+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 434176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.453549+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.453720+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.453891+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.454104+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.454390+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.454625+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.454823+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.455104+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.455393+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.455572+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.455712+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.456119+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.456394+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.457509+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.457677+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.457821+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.458030+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.458259+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859594 data_alloc: 218103808 data_used: 176128
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.458413+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb85e3/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.458832+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 425984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd64c00
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.459052+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 385024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.460790+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 187.620025635s of 187.940231323s, submitted: 90
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 303104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.460953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981955 data_alloc: 218103808 data_used: 184320
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 24182784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.461132+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 130 ms_handle_reset con 0x55c27dd64c00 session 0x55c27b9cda40
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 24158208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65000
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba9a000/0x0/0x4ffc00000, data 0x10bd8da/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,1])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.461271+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fba98000/0x0/0x4ffc00000, data 0x10bd90d/0x1185000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.461568+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 131 ms_handle_reset con 0x55c27dd65000 session 0x55c27d99da40
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.461815+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.462040+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991578 data_alloc: 218103808 data_used: 188416
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.462689+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.463079+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.463303+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.463648+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72392704 unmapped: 24035328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fba94000/0x0/0x4ffc00000, data 0x10bf4a6/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228631973s of 10.493903160s, submitted: 48
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.463938+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.464329+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 24051712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.464533+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.464747+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.464995+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.465219+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.465428+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.465669+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.465880+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.466067+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 24043520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.466305+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993368 data_alloc: 218103808 data_used: 188416
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.466580+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.466758+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.466953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.467977+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.469150+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993528 data_alloc: 218103808 data_used: 192512
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba92000/0x0/0x4ffc00000, data 0x10c0f09/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.470364+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72409088 unmapped: 24018944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.470642+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.048984528s of 17.207933426s, submitted: 15
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 23986176 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65400
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.470942+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 23969792 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.471239+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 10
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba8c000/0x0/0x4ffc00000, data 0x10c6f88/0x1192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 23945216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.471957+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996008 data_alloc: 218103808 data_used: 192512
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 23879680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.472192+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 23617536 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.472763+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.473057+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba7c000/0x0/0x4ffc00000, data 0x10d4f06/0x11a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 23453696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.473288+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 23306240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.473570+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000508 data_alloc: 218103808 data_used: 192512
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 23371776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.473802+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 23240704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.474006+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 11
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.229097366s of 10.394592285s, submitted: 43
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba6b000/0x0/0x4ffc00000, data 0x10e64b7/0x11b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 23126016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.474205+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 23117824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.474383+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 23044096 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.474581+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001568 data_alloc: 218103808 data_used: 192512
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba61000/0x0/0x4ffc00000, data 0x10f0ecd/0x11bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 23019520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.474759+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.474904+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 22978560 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.475050+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba57000/0x0/0x4ffc00000, data 0x10fbde4/0x11c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 21725184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.475194+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 20561920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.475592+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fba4b000/0x0/0x4ffc00000, data 0x110823a/0x11d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008874 data_alloc: 218103808 data_used: 200704
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.475769+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba47000/0x0/0x4ffc00000, data 0x1109e20/0x11d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.475930+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 20488192 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.060367584s of 10.409746170s, submitted: 65
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.476126+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba3b000/0x0/0x4ffc00000, data 0x1116701/0x11e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.476270+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 20463616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.476423+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007998 data_alloc: 218103808 data_used: 200704
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 20398080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.476628+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 20340736 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.476892+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fba2c000/0x0/0x4ffc00000, data 0x11254f0/0x11f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 20250624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.477117+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 20373504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.477252+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 20299776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.478021+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013872 data_alloc: 218103808 data_used: 208896
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.478246+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 20242432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.478577+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76201984 unmapped: 20226048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x113fb10/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.478777+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.961433411s of 10.210658073s, submitted: 54
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.478948+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.479150+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010722 data_alloc: 218103808 data_used: 208896
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76333056 unmapped: 20094976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.479313+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 20078592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.479553+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 20054016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.479720+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9fb000/0x0/0x4ffc00000, data 0x1156181/0x1223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 20045824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.479905+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 76505088 unmapped: 19922944 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fb9f6000/0x0/0x4ffc00000, data 0x115af0e/0x1228000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.480134+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011714 data_alloc: 218103808 data_used: 208896
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.480338+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa854000/0x0/0x4ffc00000, data 0x115cd59/0x122a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.480549+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.480786+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 17793024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.710002899s of 10.840860367s, submitted: 28
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.480961+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x1166919/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.481140+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010124 data_alloc: 218103808 data_used: 208896
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.481314+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.481498+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.481642+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 17686528 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.481807+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.482027+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa84a000/0x0/0x4ffc00000, data 0x116795a/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011676 data_alloc: 218103808 data_used: 208896
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17555456 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.482200+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.482359+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78913536 unmapped: 17514496 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.482531+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fa843000/0x0/0x4ffc00000, data 0x116ec80/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 78888960 unmapped: 17539072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.881333351s of 10.000229836s, submitted: 24
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.482677+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.482869+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa82f000/0x0/0x4ffc00000, data 0x1180b27/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017194 data_alloc: 218103808 data_used: 217088
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 16433152 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.483156+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 16302080 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.483332+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0x118a04b/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 16261120 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.483576+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80207872 unmapped: 16220160 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.483758+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.483938+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020498 data_alloc: 218103808 data_used: 217088
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.484183+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.484405+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0x119b383/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80330752 unmapped: 16097280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.484584+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.363933563s of 10.000885010s, submitted: 68
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.484728+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.484969+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022282 data_alloc: 218103808 data_used: 217088
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80478208 unmapped: 15949824 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.485101+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7f7000/0x0/0x4ffc00000, data 0x11b5e3b/0x1286000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80240640 unmapped: 16187392 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.485268+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80281600 unmapped: 16146432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.485413+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80289792 unmapped: 16138240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.485569+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.485720+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1027624 data_alloc: 218103808 data_used: 225280
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 16048128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.485847+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7e0000/0x0/0x4ffc00000, data 0x11ccd00/0x129d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 80461824 unmapped: 15966208 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.485992+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 14663680 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.486108+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.663500786s of 10.003384590s, submitted: 58
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa7b5000/0x0/0x4ffc00000, data 0x11f6ec0/0x12c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.486287+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 14647296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.486518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030366 data_alloc: 218103808 data_used: 225280
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.486736+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 14467072 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa796000/0x0/0x4ffc00000, data 0x1216d90/0x12e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.486893+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.487058+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 14073856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fa783000/0x0/0x4ffc00000, data 0x122a36a/0x12fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.487223+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 13795328 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.487407+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 13811712 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1052002 data_alloc: 218103808 data_used: 233472
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.487582+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83304448 unmapped: 13123584 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.487711+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa71f000/0x0/0x4ffc00000, data 0x12889a8/0x135d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.487889+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 13221888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa708000/0x0/0x4ffc00000, data 0x12a03d6/0x1375000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.410496712s of 10.063361168s, submitted: 153
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.488041+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 12115968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.488212+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049980 data_alloc: 218103808 data_used: 233472
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.488416+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 12107776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.488647+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 12025856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.488805+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 11812864 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.488985+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 10698752 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fa6c4000/0x0/0x4ffc00000, data 0x12e34ad/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,0,0,0,2])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.489248+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 11739136 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062908 data_alloc: 218103808 data_used: 241664
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.489386+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 12328960 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa698000/0x0/0x4ffc00000, data 0x130b8fd/0x13e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.489541+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.489750+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 12050432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.489893+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.388790131s of 10.036962509s, submitted: 152
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 12066816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.490575+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 11968512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067180 data_alloc: 218103808 data_used: 249856
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.491893+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 11952128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.492468+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa65e000/0x0/0x4ffc00000, data 0x1348b92/0x1420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.493132+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 11091968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.494013+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 10993664 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.494758+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fa647000/0x0/0x4ffc00000, data 0x1360b19/0x1437000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069652 data_alloc: 218103808 data_used: 258048
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.495060+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 10870784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.495398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 10616832 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.495529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.495697+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.325051308s of 10.040717125s, submitted: 60
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.495851+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 10592256 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075246 data_alloc: 218103808 data_used: 266240
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.496173+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa629000/0x0/0x4ffc00000, data 0x137b131/0x1454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.496343+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 10559488 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.496574+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 10461184 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.496885+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.497175+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079110 data_alloc: 218103808 data_used: 266240
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.497310+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85934080 unmapped: 10493952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.497467+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 10428416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa60c000/0x0/0x4ffc00000, data 0x139810a/0x1472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.497683+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 10395648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.497938+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.498156+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ed000/0x0/0x4ffc00000, data 0x13b6889/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.324265480s of 11.515779495s, submitted: 35
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077896 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.498376+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.498697+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.498870+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 10321920 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5ee000/0x0/0x4ffc00000, data 0x13b67ee/0x1490000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.499097+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.499364+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 10207232 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086088 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.499634+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 10420224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.499940+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87195648 unmapped: 9232384 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa59e000/0x0/0x4ffc00000, data 0x1404cc1/0x14e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.500111+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9199616 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.500322+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86802432 unmapped: 9625600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.500573+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093304 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.500716+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 86958080 unmapped: 9469952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734145164s of 11.004765511s, submitted: 52
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.500953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa56b000/0x0/0x4ffc00000, data 0x1438f3c/0x1513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.501153+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87187456 unmapped: 9240576 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.501306+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.504416+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x147084c/0x154b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095534 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.504608+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.504812+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87261184 unmapped: 9166848 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.504969+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 9150464 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.505157+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.505371+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87392256 unmapped: 9035776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa50b000/0x0/0x4ffc00000, data 0x14983d4/0x1572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095530 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.505527+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.505677+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 87638016 unmapped: 8790016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.397135735s of 11.280517578s, submitted: 59
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.505871+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 7593984 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.506005+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 7479296 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4d0000/0x0/0x4ffc00000, data 0x14d321d/0x15ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.506209+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098050 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.506343+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88956928 unmapped: 7471104 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.506509+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 7462912 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.506775+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.506934+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.507083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ca000/0x0/0x4ffc00000, data 0x14d58e2/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099392 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.507367+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.507540+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.507735+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.507914+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.508095+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.184447289s of 12.378032684s, submitted: 30
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098094 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.508264+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.508420+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.508590+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.508762+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.508913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1098462 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.509078+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88481792 unmapped: 7946240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65800
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.509250+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 12
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.509414+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.509550+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88530944 unmapped: 7897088 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.509775+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.734287262s of 10.033769608s, submitted: 19
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa4cd000/0x0/0x4ffc00000, data 0x14d592e/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099814 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.510030+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.510414+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.510842+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.511054+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.511289+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.511510+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101084 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.511821+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.512009+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.512213+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.512394+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.512573+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100090 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359554291s of 11.549299240s, submitted: 18
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.512729+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 7864320 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.512877+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.513020+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.513213+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88596480 unmapped: 7831552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d57e8/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.513359+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.513544+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.513805+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.514459+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.514672+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d5816/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.514916+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100202 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.515138+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.515464+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.601215363s of 11.686765671s, submitted: 14
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.515669+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.515845+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.516030+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100378 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.516245+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bd000/0x0/0x4ffc00000, data 0x14d581b/0x15b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.516417+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.516586+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.516761+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.516906+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101794 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d58b5/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.517040+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.517210+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.517372+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.926100731s of 11.090178490s, submitted: 16
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.517533+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88416256 unmapped: 8011776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa0bc000/0x0/0x4ffc00000, data 0x14d5883/0x15b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.517690+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101618 data_alloc: 218103808 data_used: 274432
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88449024 unmapped: 7979008 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.517850+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88457216 unmapped: 7970816 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.518027+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.518195+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.518445+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.518613+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106954 data_alloc: 218103808 data_used: 282624
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d742f/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.518759+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88465408 unmapped: 7962624 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.518900+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.519056+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.519285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0b8000/0x0/0x4ffc00000, data 0x14d7530/0x15b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88473600 unmapped: 7954432 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.746568680s of 10.998859406s, submitted: 51
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.519445+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108006 data_alloc: 218103808 data_used: 282624
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.519532+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88489984 unmapped: 7938048 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.519752+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fa0ba000/0x0/0x4ffc00000, data 0x14d7400/0x15b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.519928+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88522752 unmapped: 7905280 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _renew_subs
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 145 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.520092+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.520178+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115008 data_alloc: 218103808 data_used: 290816
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.520321+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.520518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.520650+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88547328 unmapped: 7880704 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 147 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14daac0/0x15b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.520757+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88555520 unmapped: 7872512 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc543/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.520945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119046 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848780632s of 10.849118233s, submitted: 70
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.521093+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.521219+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.521440+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88571904 unmapped: 7856128 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.521678+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc608/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.521843+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119676 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 7839744 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.522014+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc541/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.522153+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88621056 unmapped: 7806976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.522329+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.522559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.522715+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119372 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.522887+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.558925629s of 10.797169685s, submitted: 23
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 7798784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.523007+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.523123+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.523307+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.523536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120626 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b2000/0x0/0x4ffc00000, data 0x14dc547/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.523708+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.523884+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88645632 unmapped: 7782400 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.524065+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 7766016 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.524228+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 ms_handle_reset con 0x55c27dd65800 session 0x55c27d401c20
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b3000/0x0/0x4ffc00000, data 0x14dc44b/0x15bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89284608 unmapped: 7143424 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.524376+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121304 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 13
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.524525+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 7127040 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.524677+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.979153633s of 11.168646812s, submitted: 206
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.524834+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.524996+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc511/0x15bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.525190+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120872 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.525318+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.525546+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.525696+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89317376 unmapped: 7110656 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.525852+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0af000/0x0/0x4ffc00000, data 0x14dc5e2/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.525964+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122944 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b0000/0x0/0x4ffc00000, data 0x14dc5ad/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.526106+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.526244+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.526389+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.424748421s of 10.796654701s, submitted: 27
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 7069696 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.526517+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.526636+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1123366 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa0b1000/0x0/0x4ffc00000, data 0x14dc5e1/0x15bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.526788+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 7077888 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.526945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89522176 unmapped: 6905856 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.527116+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.527300+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 89366528 unmapped: 7061504 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa083000/0x0/0x4ffc00000, data 0x1508cc2/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.527550+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130868 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90562560 unmapped: 5865472 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.527724+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 90677248 unmapped: 5750784 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.527909+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 5332992 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.528063+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa02d000/0x0/0x4ffc00000, data 0x155d2e5/0x163f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.528279+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.726997375s of 10.987854004s, submitted: 60
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91480064 unmapped: 4947968 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.528451+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142070 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91553792 unmapped: 4874240 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.528685+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 4759552 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.528843+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9ff1000/0x0/0x4ffc00000, data 0x159c2c8/0x167d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.529015+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 4734976 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.529168+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.529359+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143868 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 3481600 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f97000/0x0/0x4ffc00000, data 0x15f5aa9/0x16d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.529526+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93011968 unmapped: 3416064 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.529673+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93282304 unmapped: 3145728 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.529829+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.530031+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93102080 unmapped: 3325952 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.212817192s of 10.548931122s, submitted: 84
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.530164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151538 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92823552 unmapped: 3604480 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.530344+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9f0a000/0x0/0x4ffc00000, data 0x1682913/0x1763000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92831744 unmapped: 3596288 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.530547+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 92864512 unmapped: 3563520 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.530742+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.530887+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94117888 unmapped: 2310144 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.531015+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158170 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2383872 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.531157+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94208000 unmapped: 2220032 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9eb9000/0x0/0x4ffc00000, data 0x16d2996/0x17b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.531336+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93757440 unmapped: 2670592 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.531529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93806592 unmapped: 2621440 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.531679+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e87000/0x0/0x4ffc00000, data 0x1705e04/0x17e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.531826+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154906 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93986816 unmapped: 2441216 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.826278687s of 11.162016869s, submitted: 70
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e79000/0x0/0x4ffc00000, data 0x1714710/0x17f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.531999+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 93995008 unmapped: 2433024 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.532174+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e56000/0x0/0x4ffc00000, data 0x1736fbe/0x1817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94191616 unmapped: 2236416 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.532333+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94199808 unmapped: 2228224 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.532593+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e42000/0x0/0x4ffc00000, data 0x174b9d7/0x182c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1179648 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.532745+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161486 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.532913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.533459+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 1163264 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.533702+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9e15000/0x0/0x4ffc00000, data 0x17766f9/0x1858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94511104 unmapped: 1916928 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9dfa000/0x0/0x4ffc00000, data 0x17933f1/0x1874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,3])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.534642+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94560256 unmapped: 1867776 heap: 96428032 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.534945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170682 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 2760704 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.535300+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.836094856s of 10.913021088s, submitted: 68
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.536094+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2367488 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.536285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d8f000/0x0/0x4ffc00000, data 0x17fcccc/0x18de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 95117312 unmapped: 2359296 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.536920+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96043008 unmapped: 1433600 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.537453+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172456 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.538018+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96075776 unmapped: 1400832 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:16.538345+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d5a000/0x0/0x4ffc00000, data 0x1832e2f/0x1913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.538537+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 1212416 heap: 97476608 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.538686+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.538894+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180160 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96534528 unmapped: 1990656 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.539082+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9d05000/0x0/0x4ffc00000, data 0x1887752/0x1968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96067584 unmapped: 2457600 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.539278+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.346602440s of 10.624962807s, submitted: 67
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96133120 unmapped: 2392064 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.539518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cf2000/0x0/0x4ffc00000, data 0x189b875/0x197c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.539673+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 8915 writes, 34K keys, 8915 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 8915 writes, 2241 syncs, 3.98 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3166 writes, 10K keys, 3166 commit groups, 1.0 writes per commit group, ingest: 14.20 MB, 0.02 MB/s
                                           Interval WAL: 3166 writes, 1329 syncs, 2.38 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2383872 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.539839+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179516 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96264192 unmapped: 2260992 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.540031+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cc2000/0x0/0x4ffc00000, data 0x18ca961/0x19ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96321536 unmapped: 2203648 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.540223+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.540369+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96411648 unmapped: 2113536 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.540566+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 2039808 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.540749+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.540911+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 2031616 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc ms_handle_reset ms_handle_reset con 0x55c27c775400
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: get_auth_request con 0x55c27dd65800 auth_method 0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.541055+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.541208+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.541388+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.541568+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178798 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96657408 unmapped: 1867776 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.541701+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc513/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.670079231s of 13.884990692s, submitted: 22
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.541845+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.541981+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96665600 unmapped: 1859584 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.542148+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.542287+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177262 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.542519+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.542681+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.542830+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.543039+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.543249+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc4e6/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.543442+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.543612+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96673792 unmapped: 1851392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.543777+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.543953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.544090+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.544264+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.544428+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.544593+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.545035+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.545307+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176092 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.545583+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.545739+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.552337646s of 21.707801819s, submitted: 8
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.545901+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.546075+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.546324+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176268 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.546553+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.546810+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.546973+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.547158+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96681984 unmapped: 1843200 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.547360+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177860 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.547536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.547755+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.547944+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.548163+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.837540627s of 11.850649834s, submitted: 3
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.548349+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5c6/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179404 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.548559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.548733+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 1826816 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.548900+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.549096+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.549355+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177186 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.549523+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96706560 unmapped: 1818624 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.549696+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.549862+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.550061+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.550214+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178762 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.550375+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.550584+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.027555466s of 13.073743820s, submitted: 11
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.550794+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 1892352 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.550967+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.551135+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180354 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.551348+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.551530+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.551721+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.551904+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.552056+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96641024 unmapped: 1884160 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179664 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.552258+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.552533+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.552710+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 96649216 unmapped: 1875968 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.837194443s of 10.898418427s, submitted: 7
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.552821+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97697792 unmapped: 827392 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.552968+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97624064 unmapped: 901120 heap: 98525184 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184054 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.553114+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.553333+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc750/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.553507+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 892928 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.553699+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.553870+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97632256 unmapped: 1941504 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185166 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc74e/0x19bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.554040+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 1933312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.554230+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.554415+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.554591+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.554714+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 1925120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.652610779s of 11.716604233s, submitted: 16
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185554 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.554895+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 97705984 unmapped: 1867776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.555123+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.555285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 933888 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.555572+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.555739+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186584 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.555874+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.556005+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b5/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.556168+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.556330+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 925696 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18dc6b3/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.556562+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185590 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.556719+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 909312 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.556911+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.021935463s of 12.413156509s, submitted: 105
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.557113+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.557395+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.557654+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186668 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.557897+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.558071+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb1000/0x0/0x4ffc00000, data 0x18dc5ec/0x19bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.558232+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.558435+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.558621+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185978 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.558761+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98672640 unmapped: 901120 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.558945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb2000/0x0/0x4ffc00000, data 0x18dc551/0x19bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.559097+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.559358+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 heartbeat osd_stat(store_statfs(0x4f9cb3000/0x0/0x4ffc00000, data 0x18dc4b6/0x19bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.559588+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185128 data_alloc: 218103808 data_used: 299008
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.559752+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.800412178s of 13.922379494s, submitted: 10
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.560021+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.560168+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 860160 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.560350+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.560540+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189286 data_alloc: 218103808 data_used: 307200
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.560819+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98721792 unmapped: 851968 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.561049+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 843776 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.561211+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9caf000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.561397+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.561589+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98762752 unmapped: 811008 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189462 data_alloc: 218103808 data_used: 307200
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.561762+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.561896+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.562038+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 802816 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 149 heartbeat osd_stat(store_statfs(0x4f9cb0000/0x0/0x4ffc00000, data 0x18de09c/0x19be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.972100258s of 12.103911400s, submitted: 26
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.562213+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.562403+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 794624 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192404 data_alloc: 218103808 data_used: 315392
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.562584+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.562740+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 786432 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 14
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: handle_auth_request added challenge on 0x55c27dd65000
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.562895+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfaff/0x19c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.563077+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc11/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 98811904 unmapped: 761856 heap: 99573760 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.563240+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 761856 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193468 data_alloc: 218103808 data_used: 315392
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.563394+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.563567+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.563742+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.563904+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.564083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193644 data_alloc: 218103808 data_used: 315392
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.564213+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.880873680s of 12.922811508s, submitted: 19
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.564412+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99737600 unmapped: 884736 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cac000/0x0/0x4ffc00000, data 0x18dfb9a/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.564584+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.564807+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18dfcd0/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.564974+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 868352 heap: 100622336 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198760 data_alloc: 218103808 data_used: 315392
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.565111+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.565272+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.565424+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99753984 unmapped: 1916928 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 heartbeat osd_stat(store_statfs(0x4f9cab000/0x0/0x4ffc00000, data 0x18dfc35/0x19c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.565587+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.565788+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.565940+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197700 data_alloc: 218103808 data_used: 323584
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.566202+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.758671761s of 10.983880043s, submitted: 61
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.566394+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.566687+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.566876+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.567005+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197876 data_alloc: 218103808 data_used: 323584
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99778560 unmapped: 1892352 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.567164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 151 heartbeat osd_stat(store_statfs(0x4f9caa000/0x0/0x4ffc00000, data 0x18e1715/0x19c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.567324+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99786752 unmapped: 1884160 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.567591+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.567759+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 1875968 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.567942+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201698 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.568117+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.568254+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.046489716s of 11.064700127s, submitted: 14
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.568409+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.568551+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99803136 unmapped: 1867776 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.568683+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202762 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.568809+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.569112+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99811328 unmapped: 1859584 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.569772+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca6000/0x0/0x4ffc00000, data 0x18e3233/0x19c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.569920+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.570070+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202586 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.570227+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99819520 unmapped: 1851392 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 heartbeat osd_stat(store_statfs(0x4f9ca7000/0x0/0x4ffc00000, data 0x18e3198/0x19c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.570668+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.570850+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.863492966s of 10.986426353s, submitted: 33
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.571000+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99827712 unmapped: 1843200 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.571146+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 153 heartbeat osd_stat(store_statfs(0x4f9ca3000/0x0/0x4ffc00000, data 0x18e4d7e/0x19ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206958 data_alloc: 218103808 data_used: 339968
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 1810432 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.571284+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.571536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99868672 unmapped: 1802240 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.571743+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.571921+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.572186+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214634 data_alloc: 218103808 data_used: 352256
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.572356+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.572503+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.572666+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.573063+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.573411+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9c000/0x0/0x4ffc00000, data 0x18e8559/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214978 data_alloc: 218103808 data_used: 352256
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99885056 unmapped: 1785856 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.761722565s of 12.017519951s, submitted: 41
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.573691+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 1761280 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.573908+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99926016 unmapped: 1744896 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 155 heartbeat osd_stat(store_statfs(0x4f9c9d000/0x0/0x4ffc00000, data 0x18e84be/0x19d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.574052+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.575121+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.575399+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223380 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.575642+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99958784 unmapped: 1712128 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.575826+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c95000/0x0/0x4ffc00000, data 0x18ebca8/0x19d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.576344+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99966976 unmapped: 1703936 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.576513+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.576734+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220608 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.576945+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.814929008s of 10.932528496s, submitted: 43
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.577165+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 157 heartbeat osd_stat(store_statfs(0x4f9c98000/0x0/0x4ffc00000, data 0x18ebaf8/0x19d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99975168 unmapped: 1695744 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.577402+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.577614+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.577806+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224766 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.578030+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.578234+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.578431+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.578571+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.578708+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226534 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.578875+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.579050+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 99999744 unmapped: 1671168 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.579211+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.579380+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.052343369s of 13.091160774s, submitted: 15
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.579600+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1225494 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.579765+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.580064+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.580307+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.580500+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 614400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c93000/0x0/0x4ffc00000, data 0x18ed6b1/0x19db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.580659+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 heartbeat osd_stat(store_statfs(0x4f9c94000/0x0/0x4ffc00000, data 0x18ed616/0x19da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226380 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.580855+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100007936 unmapped: 1662976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.581033+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.581379+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.581530+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.581660+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228808 data_alloc: 218103808 data_used: 376832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.581822+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.581982+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.582175+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100032512 unmapped: 1638400 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.582321+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.807965279s of 14.139179230s, submitted: 58
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.582539+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229000 data_alloc: 218103808 data_used: 376832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.582716+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.582885+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100040704 unmapped: 1630208 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.583062+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 159 heartbeat osd_stat(store_statfs(0x4f9c92000/0x0/0x4ffc00000, data 0x18ef161/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 159 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.583198+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100114432 unmapped: 1556480 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 161 heartbeat osd_stat(store_statfs(0x4f9c8a000/0x0/0x4ffc00000, data 0x18f27c6/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.583354+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236804 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.583529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100147200 unmapped: 1523712 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.583666+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100155392 unmapped: 1515520 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.583803+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.583942+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.584086+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 162 heartbeat osd_stat(store_statfs(0x4f9c88000/0x0/0x4ffc00000, data 0x18f43ac/0x19e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239106 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.630488396s of 11.893076897s, submitted: 66
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.584209+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.584331+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100171776 unmapped: 1499136 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.584453+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.584595+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100188160 unmapped: 1482752 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.584724+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242096 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 163 heartbeat osd_stat(store_statfs(0x4f9c85000/0x0/0x4ffc00000, data 0x18f5e0f/0x19e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.584856+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100196352 unmapped: 1474560 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.584974+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.585130+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.585296+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.585520+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245550 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.585676+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c82000/0x0/0x4ffc00000, data 0x18f7a25/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.585848+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.243680000s of 11.435800552s, submitted: 63
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100212736 unmapped: 1458176 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.585988+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 164 heartbeat osd_stat(store_statfs(0x4f9c81000/0x0/0x4ffc00000, data 0x18f7ac0/0x19ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.586130+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.586294+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249940 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100220928 unmapped: 1449984 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.586449+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100229120 unmapped: 1441792 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.586557+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.586705+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100237312 unmapped: 1433600 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.586861+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 166 heartbeat osd_stat(store_statfs(0x4f9c7b000/0x0/0x4ffc00000, data 0x18fb08e/0x19f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.587037+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255022 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100261888 unmapped: 1409024 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.587172+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x18fcca4/0x19f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.587287+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.587417+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.587608+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100270080 unmapped: 1400832 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.215492249s of 12.445914268s, submitted: 69
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.587755+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258316 data_alloc: 218103808 data_used: 401408
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.588313+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.588414+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.588645+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 168 heartbeat osd_stat(store_statfs(0x4f9c76000/0x0/0x4ffc00000, data 0x18fe727/0x19f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.588822+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.589003+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258492 data_alloc: 218103808 data_used: 401408
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.589150+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 1392640 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.589340+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100286464 unmapped: 1384448 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.589565+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 169 ms_handle_reset con 0x55c27dd65000 session 0x55c27f3a21e0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 1048576 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.589705+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 15
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.589863+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261466 data_alloc: 218103808 data_used: 401408
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.590039+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.873164177s of 11.996927261s, submitted: 252
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9c73000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.590218+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.590400+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.590598+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.590754+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260762 data_alloc: 218103808 data_used: 401408
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.590895+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.591083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 169 heartbeat osd_stat(store_statfs(0x4f9864000/0x0/0x4ffc00000, data 0x190030d/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.591252+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.591398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.591624+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.591858+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.592035+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.592266+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.592445+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.592532+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.592714+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.592908+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.593067+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.593255+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.593469+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.593639+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.593793+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.594029+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.594232+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.594435+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.594586+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.595046+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.595253+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.595398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.595553+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.595723+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.595900+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.596071+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.596214+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.596397+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.596651+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.596816+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.597010+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.597173+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.597399+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.597561+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.597795+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.599785+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.601301+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.602303+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 1024000 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.602648+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.603077+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.603836+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.604330+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.604581+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 1015808 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.605444+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.605548+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.606093+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9860000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.606350+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.606867+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264584 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.607145+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.607311+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.607868+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 1007616 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.363555908s of 56.390811920s, submitted: 15
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 ms_handle_reset con 0x55c27dd65400 session 0x55c27c84cd20
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.608086+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Got map version 16
Nov 22 06:10:13 compute-0 ceph-osd[91881]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.608395+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.608573+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.608741+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.608888+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.609031+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.609154+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.609279+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.609448+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.609604+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.609766+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.609910+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.610075+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.610228+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.610392+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.610549+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.610750+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.611039+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.611551+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.611987+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.612389+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.612522+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.613002+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.613207+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.613398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.613569+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.613691+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.613827+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.613961+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 696320 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.614083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 638976 heap: 101670912 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.614209+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1933312 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.614332+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2179072 heap: 102719488 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.614529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 111198208 unmapped: 2564096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf dump' '{prefix=perf dump}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:17.614686+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf schema' '{prefix=perf schema}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100278272 unmapped: 13484032 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:18.614829+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:19.614943+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:20.615119+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:21.615275+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:22.615401+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:23.615536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:24.615668+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:25.615858+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:26.616517+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:27.616638+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:28.616772+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:29.616890+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:30.617013+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100319232 unmapped: 13443072 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:31.617161+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:32.617781+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:33.617927+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:34.618056+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:35.618170+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:36.618285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:37.618565+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:38.618660+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:39.618853+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:40.618965+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:41.619123+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:42.619254+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:43.619399+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:44.619532+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:45.619680+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:46.619823+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 13426688 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:47.620049+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:48.620240+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:49.620422+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:50.620687+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:51.620927+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:52.621036+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:53.621143+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:54.621288+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100343808 unmapped: 13418496 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:55.621449+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:56.621668+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:57.621943+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:58.622212+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:59.622442+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:00.622604+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:01.622839+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:02.623098+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:03.623313+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:04.623452+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100352000 unmapped: 13410304 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:05.623545+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:06.623680+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:07.623903+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:08.624556+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:09.624704+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:10.624875+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:11.625005+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:12.625257+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:13.625587+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:14.625824+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:15.625992+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:16.626190+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:17.626348+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100360192 unmapped: 13402112 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:18.626544+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:19.626698+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:20.626839+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:21.626980+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 13393920 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:22.627165+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:23.627370+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:24.627556+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:25.627713+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:26.627868+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:27.628051+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:28.628251+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:29.628390+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:30.628540+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:31.628670+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:32.628835+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:33.628992+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:34.629157+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100376576 unmapped: 13385728 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:35.629285+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:36.629431+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:37.629576+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:38.629789+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:39.629943+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:40.630165+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:41.630348+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:42.630570+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100384768 unmapped: 13377536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:43.630729+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:44.630940+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:45.631139+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:46.631314+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:47.631539+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:48.631733+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:49.631913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:50.632165+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:51.632294+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:52.632569+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:53.632726+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:54.632934+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:55.633127+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:56.633289+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:57.633469+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:58.633851+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100392960 unmapped: 13369344 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:59.634043+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:00.634229+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:01.634410+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:02.634593+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:03.634880+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:04.635164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:05.635377+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:06.635643+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100401152 unmapped: 13361152 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:07.635995+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:08.636337+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:09.636830+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:10.637125+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:11.637387+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:12.638053+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:13.638554+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:14.639272+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100409344 unmapped: 13352960 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:15.639684+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:16.639982+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:17.640193+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:18.640585+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:19.640772+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:20.640970+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:21.641207+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:22.641382+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:23.641529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:24.641727+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:25.641977+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:26.642173+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:27.642342+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:28.642536+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:29.642719+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:30.642889+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:31.643064+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:32.643233+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:33.643559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:34.643698+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:35.643869+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:36.644093+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:37.644226+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100417536 unmapped: 13344768 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:38.644411+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:39.644603+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:40.644780+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100425728 unmapped: 13336576 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:41.644991+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:42.645368+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:43.645531+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:44.645721+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:45.645912+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:46.646134+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:47.646357+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:48.646621+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:49.646792+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:50.647057+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:51.647354+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:52.647545+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:53.647782+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:54.648055+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:55.648248+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:56.648412+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:57.648591+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:58.648874+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:59.649088+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:00.649298+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 13328384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:01.649530+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100450304 unmapped: 13312000 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:02.649692+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100450304 unmapped: 13312000 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:03.649904+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:04.650047+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:05.650201+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:06.650365+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:07.650554+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:08.650746+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:09.650890+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:10.651164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:11.651402+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:12.651540+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100458496 unmapped: 13303808 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:13.651741+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:14.651901+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:15.652118+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:16.652237+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100466688 unmapped: 13295616 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:17.652436+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 13287424 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:18.652650+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100474880 unmapped: 13287424 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:19.652802+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 13279232 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:20.652953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:21.653160+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:22.653308+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:23.653450+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:24.653621+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:25.653757+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:26.653937+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:27.654096+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:28.654313+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:29.654466+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:30.654639+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:31.654870+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:32.655033+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:33.655205+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:34.655398+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:35.655523+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 13262848 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:36.655702+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:37.655999+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:38.656249+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:39.656403+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:40.656628+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100515840 unmapped: 13246464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:41.656874+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:42.657079+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:43.657233+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:44.657404+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:45.657562+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:46.657960+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:47.658272+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:48.658446+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:49.658580+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:50.658850+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100532224 unmapped: 13230080 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:51.658997+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:52.659149+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:53.659304+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:54.659606+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:55.659837+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:56.660023+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:57.660292+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:58.660747+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:59.660972+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:00.661141+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 13221888 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:01.661336+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100556800 unmapped: 13205504 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:02.661677+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:03.661936+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:04.662164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:05.662325+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:06.662549+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:07.662720+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:08.662903+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:09.663083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100564992 unmapped: 13197312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:10.663298+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:11.663446+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:12.663644+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:13.663916+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:14.664126+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:15.664284+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:16.664527+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:17.664749+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:18.664967+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:19.665127+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:20.665308+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100573184 unmapped: 13189120 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:21.665517+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:22.665779+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 2769 syncs, 3.81 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1633 writes, 3817 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 2.01 MB, 0.00 MB/s
                                           Interval WAL: 1633 writes, 528 syncs, 3.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:23.665927+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:24.666137+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:25.666299+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:26.666518+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:27.666693+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:28.666925+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:29.667076+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:30.699034+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100589568 unmapped: 13172736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:31.699189+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100597760 unmapped: 13164544 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:32.699412+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:33.699608+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:34.699770+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:35.699935+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:36.700061+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:37.700235+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:38.700437+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:39.700594+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:40.701183+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100605952 unmapped: 13156352 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:41.701325+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:42.701503+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:43.701701+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:44.701856+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:45.702225+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:46.702387+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:47.702557+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:48.702800+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:49.702965+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100622336 unmapped: 13139968 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:50.703164+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:51.703335+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:52.703463+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:53.703589+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:54.703754+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:55.703913+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:56.704061+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:57.704230+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:58.704444+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:59.704586+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:00.704732+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 13131776 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:01.704885+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:02.705102+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:03.705293+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:04.705445+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:05.705629+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100646912 unmapped: 13115392 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:06.705814+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:07.705977+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:08.706221+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:09.706401+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:10.706562+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100655104 unmapped: 13107200 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:11.706767+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:12.706956+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:13.707145+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100663296 unmapped: 13099008 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:14.707266+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:15.707463+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:16.707655+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:17.707803+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:18.708036+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:19.708234+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:20.708402+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100679680 unmapped: 13082624 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:21.708548+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:22.708710+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:23.709820+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:24.710718+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:25.711026+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:26.712560+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100696064 unmapped: 13066240 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:27.712748+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:28.714096+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:29.714266+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:30.714516+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:31.714675+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:32.714852+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:33.715047+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:34.715251+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100704256 unmapped: 13058048 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:35.715521+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:36.715903+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:37.716086+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:38.716415+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:39.716575+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:40.716773+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100712448 unmapped: 13049856 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:41.716955+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:42.717351+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263880 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:43.717547+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100728832 unmapped: 13033472 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:44.717846+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 361.382568359s of 361.418334961s, submitted: 158
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100745216 unmapped: 13017088 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:45.718000+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:46.718168+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:47.718316+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:48.718545+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:49.718726+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:50.718905+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:51.719057+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:52.719214+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:53.719326+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:54.719556+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:55.719748+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:56.721512+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:57.721931+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100843520 unmapped: 12918784 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:58.724038+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:59.724529+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:00.725015+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:01.725415+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:02.725637+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:03.725751+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:04.725939+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:05.726098+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:06.726293+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100851712 unmapped: 12910592 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:07.726453+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:08.726764+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:09.726894+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:10.727169+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:11.727309+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:12.727549+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100859904 unmapped: 12902400 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:13.727925+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:14.728165+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:15.728308+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:16.728550+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:17.728691+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:18.728970+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:19.729142+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:20.729316+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:21.729618+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:22.729856+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:23.729986+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:24.730179+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:25.730335+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:26.730567+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:27.730746+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:28.730908+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:29.731016+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:30.731199+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100876288 unmapped: 12886016 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:31.731303+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:32.731575+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:33.731769+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:34.731889+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:35.732032+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:36.732172+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:37.732340+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:38.733629+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:39.733729+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:40.733855+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:41.734009+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:42.734233+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:43.734425+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:44.734657+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:45.734815+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:46.734934+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:47.735083+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:48.735321+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:49.735535+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:50.735711+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:51.735892+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:52.736136+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:53.736354+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:54.736594+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100884480 unmapped: 12877824 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:55.736879+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:56.737076+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:57.737291+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:58.737550+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:59.737754+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:00.738131+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100892672 unmapped: 12869632 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:01.738454+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:02.739643+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:03.740091+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:04.740559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:05.740726+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:06.741782+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:07.741971+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:08.743325+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:09.743617+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:10.743856+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100909056 unmapped: 12853248 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:11.744052+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:12.744192+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:13.744395+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100925440 unmapped: 12836864 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:14.744597+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:15.744734+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:16.745254+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:17.745434+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:18.745626+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:19.745840+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:20.746196+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100941824 unmapped: 12820480 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:21.746358+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:22.746524+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:23.746708+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:24.746834+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:25.746969+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:26.747227+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:27.747407+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:28.747743+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:29.747953+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:30.748151+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:31.748316+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100958208 unmapped: 12804096 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:32.748559+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:33.748732+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:34.748842+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:35.748971+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:36.749097+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:37.749247+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100966400 unmapped: 12795904 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:38.749416+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:39.749550+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:40.749714+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100974592 unmapped: 12787712 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:41.749865+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:42.750044+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:43.750185+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:44.750323+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:45.750451+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 12771328 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:46.750626+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:47.750776+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:48.751005+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:49.751145+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:50.751240+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:51.751366+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:52.751585+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:53.751730+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:54.751875+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:55.752274+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:56.752402+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 100999168 unmapped: 12763136 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:57.752555+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:58.752760+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:59.752910+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:00.753059+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101007360 unmapped: 12754944 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:01.753213+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:02.753383+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:03.753577+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:04.753820+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:05.754426+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:06.754984+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101023744 unmapped: 12738560 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:07.755463+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:08.756123+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:09.756525+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101031936 unmapped: 12730368 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:10.757044+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:11.757535+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:12.758097+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:13.758330+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:14.758557+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:15.758863+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:16.759284+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:17.759625+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:18.759910+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:19.760082+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:20.760323+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 12722176 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:21.760556+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:22.760779+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:23.761016+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:24.761223+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:25.761431+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:26.761665+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:27.761905+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:28.762122+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:29.762327+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:30.762502+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:31.762655+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:32.762776+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:33.763096+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:34.763464+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:35.763712+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:36.763841+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:37.763954+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:13 compute-0 ceph-osd[91881]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:13 compute-0 ceph-osd[91881]: bluestore.MempoolThread(0x55c27a0e5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263704 data_alloc: 218103808 data_used: 409600
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:38.764086+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101056512 unmapped: 12705792 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: osd.2 170 heartbeat osd_stat(store_statfs(0x4f9861000/0x0/0x4ffc00000, data 0x1901d70/0x19fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:39.764202+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101064704 unmapped: 12697600 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:40.764329+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101539840 unmapped: 12222464 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:41.764499+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 12517376 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: tick
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_tickets
Nov 22 06:10:13 compute-0 ceph-osd[91881]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:42.764643+0000)
Nov 22 06:10:13 compute-0 ceph-osd[91881]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 06:10:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256331948' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14881 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 06:10:13 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725595137' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.14869 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1660957875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.14877 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: pgmap v1524: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1256331948' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2725595137' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 06:10:13 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14885 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 06:10:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407678340' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:10:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14889 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:14 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 06:10:14 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/365768524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 06:10:14 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14893 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:15 compute-0 ceph-mon[75840]: from='client.14881 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mon[75840]: from='client.14885 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2407678340' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/365768524' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14899 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:10:15 compute-0 ceph-13fdadc6-d566-5465-9ac8-a148ef130da1-mgr-compute-0-mscchl[76130]: 2025-11-22T06:10:15.488+0000 7f536ac43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 06:10:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 06:10:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946734465' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 06:10:15 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 06:10:15 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208711715' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 06:10:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660098652' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: from='client.14889 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: from='client.14893 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: pgmap v1525: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1946734465' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/208711715' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1660098652' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 06:10:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451135040' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 06:10:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310310766' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 06:10:16 compute-0 crontab[292185]: (root) LIST (root)
Nov 22 06:10:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 06:10:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/286854755' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 06:10:16 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 06:10:16 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642374162' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 06:10:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1239950706' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 06:10:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939297151' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.14899 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2451135040' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2310310766' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/286854755' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3642374162' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1239950706' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: pgmap v1526: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:17 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2939297151' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 06:10:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579627788' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 06:10:17 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 06:10:17 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199070274' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 1277952 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.598602+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.598734+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 1269760 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.598881+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.599031+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.599157+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 1261568 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.599305+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.599504+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.599645+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 1253376 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.599829+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.600002+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 1245184 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 315.175384521s of 315.207031250s, submitted: 8
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.600167+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 1236992 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.600308+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.600442+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.600550+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.600688+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.600851+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.601018+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.601517+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.601820+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 1212416 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.602397+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.602671+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75284480 unmapped: 1204224 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.603012+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.603253+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.603408+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75292672 unmapped: 1196032 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.603660+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75300864 unmapped: 1187840 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.603803+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.603990+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.604181+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 1179648 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.604428+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.604566+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75317248 unmapped: 1171456 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.604691+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.605023+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75333632 unmapped: 1155072 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.605172+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.605325+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75341824 unmapped: 1146880 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.605455+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.605621+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.605763+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 1138688 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.605871+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.606018+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.606182+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 1130496 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.606345+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.606592+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 1122304 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.606780+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.607019+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.607221+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.607557+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.607736+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.607862+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.607986+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.608123+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.608244+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.608366+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.608502+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.608635+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.608761+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75374592 unmapped: 1114112 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.609006+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.609406+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.609562+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.609763+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.609910+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.610048+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.610213+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.610421+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.610585+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.610690+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.610873+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.611106+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.611277+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.611444+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.611542+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.611694+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.611825+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.611982+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.612222+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.612500+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75382784 unmapped: 1105920 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.612749+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.613025+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.613275+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.613527+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.613759+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.614005+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75390976 unmapped: 1097728 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.614228+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.614376+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.614579+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.614744+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.614979+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.615208+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.615367+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.615541+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.615669+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.615799+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.615941+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.616115+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.616238+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.616412+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.616574+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.616762+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.616961+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.617111+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.617259+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.617546+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.617712+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.617848+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.617987+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75399168 unmapped: 1089536 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.618180+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.618339+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.618555+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.618707+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.618858+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.619025+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.619245+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.619433+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.619591+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.619723+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.619899+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.620036+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.620227+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.620399+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.620560+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.620714+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 1081344 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.620852+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.620978+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.621077+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.621212+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.621364+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.621540+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.621737+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.621891+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.622076+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.622209+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.622346+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.622507+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.622642+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.622794+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.623127+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.623271+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.623439+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.623559+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.623696+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.623854+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.624009+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.624129+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.624309+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.624507+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75423744 unmapped: 1064960 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.624688+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.624805+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.625010+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.625236+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.625353+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.625511+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.625656+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.625779+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.625940+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.626112+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.626284+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.626447+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.626755+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.626952+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.627088+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.627256+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.627405+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.627575+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.627731+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.627908+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.628088+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.628222+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.628400+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.628570+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.628710+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.628845+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.629022+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.629161+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.629280+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.629497+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.629611+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 1056768 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.629734+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 1048576 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.629878+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.629998+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.630145+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.630334+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.630556+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.630719+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.630875+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.630997+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.631157+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.631316+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.631500+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.631610+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.631724+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.632454+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.632699+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.632859+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.632986+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.633126+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.633269+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.633418+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.633633+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.633802+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.633968+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.634085+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.634247+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.634399+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75448320 unmapped: 1040384 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.634575+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.634766+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.634917+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.635051+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.635226+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.635374+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.635549+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.635711+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.635879+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.636029+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.636176+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.636328+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.636461+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.636673+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.636832+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.636961+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.637092+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.637210+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.637405+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.637562+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1032192 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.637745+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.637892+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.638075+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.638215+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.638372+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.638538+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.638650+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.638780+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1024000 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: mgrc ms_handle_reset ms_handle_reset con 0x55e99eb0fc00
Nov 22 06:10:17 compute-0 ceph-osd[90784]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:10:17 compute-0 ceph-osd[90784]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: get_auth_request con 0x55e99ff2bc00 auth_method 0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.639125+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 802816 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e99f657800 session 0x55e99f863680
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a038a000
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 ms_handle_reset con 0x55e9a038a400 session 0x55e99ff2c000
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657800
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.639247+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.639395+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.639562+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.639709+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.639846+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.640029+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.640144+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.640268+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.640395+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.640522+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.640826+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.641005+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.641124+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.641287+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.641409+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.641963+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.642151+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.642692+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.642843+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.643004+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.643151+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.643305+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.643509+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.643646+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 794624 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.643790+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.643991+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.644109+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.644397+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.644668+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.644846+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.644987+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.645217+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.645385+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.645558+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 786432 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.645700+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.645849+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.646126+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.646333+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.646493+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.646686+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.646858+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.647053+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.647207+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 778240 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.647384+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.647586+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.648427+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.648575+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.648702+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.648834+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.648981+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.649112+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.649287+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.649400+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.649566+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.649715+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.649895+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.650061+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.650214+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.650421+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.650586+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.650758+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.651005+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.651169+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.651384+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.651616+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.651915+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.652127+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.652356+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.652568+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.652724+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.653417+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.653588+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.653786+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.653902+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.654055+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.654226+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.654456+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.654637+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.654803+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 770048 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.661059+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.661234+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 761856 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.661403+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.661576+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.661686+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.661832+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.662019+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.662234+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.662402+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.662557+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.662884+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.663149+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.663317+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.663466+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.663683+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.663869+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 753664 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.664030+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.664228+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.664360+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.664546+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.664725+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.664879+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.665033+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.665175+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.665330+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.665514+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.665757+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.665900+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.666046+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.666211+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.666326+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.666457+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.666705+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.666880+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.666993+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.667122+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.667283+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.667418+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.667582+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.667707+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 745472 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.667835+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.668001+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.668175+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.668362+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.668534+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 737280 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.668723+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.668949+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.669097+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.669254+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.669373+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.669554+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.669744+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.669906+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.670072+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.670266+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.670445+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.670641+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.670767+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.670893+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.671048+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.671208+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.671348+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.671541+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.671696+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.671851+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.672158+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.672295+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.672443+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.672654+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.672832+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.672972+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.673111+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 729088 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.673284+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.673434+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.673574+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 720896 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.673697+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.673845+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.673985+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.674138+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.674272+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.674439+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.674633+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.674788+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.674951+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.675091+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 712704 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.675217+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.675370+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.675542+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.675690+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.675882+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 704512 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.676142+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.676306+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.676518+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.676653+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.676783+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.676904+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.677087+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.677260+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.677396+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.677549+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.677764+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.677894+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.678055+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.678211+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.678389+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.678549+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75792384 unmapped: 696320 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.678714+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.678866+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.679074+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.679282+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.679460+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.679667+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.679797+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.679974+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.680141+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.680314+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.680548+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.680748+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.680968+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.681147+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.681351+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.681486+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.681660+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.681800+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.681978+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.682228+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.682404+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.682580+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.682787+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.682948+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.683128+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.683303+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.683455+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.683626+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.683830+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 688128 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.683971+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.684146+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.684339+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 679936 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.684514+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.684636+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.684761+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.684890+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.685185+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.685339+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.685487+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.685598+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.685728+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.685893+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.686066+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.686164+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.686308+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.686439+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.686551+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.686811+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.687012+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:20.687155+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:21.687291+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.687444+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.687693+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.687848+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.688022+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.688173+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.688366+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.688534+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.688703+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.688869+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.689527+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.689700+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.689840+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.689952+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 671744 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.690179+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.690384+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.690574+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.690743+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.690969+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.691162+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.691369+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.691501+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.691675+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.691838+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.691990+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.692184+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.692336+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.692566+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.692795+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.692964+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.693161+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.693318+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.693465+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.693667+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.693831+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.693920+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.694114+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.694308+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.694507+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.694637+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.694856+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 663552 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.695040+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.695197+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.695344+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.695539+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.695702+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.695883+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.696322+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.696442+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.696590+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.696781+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.696915+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.697029+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.697147+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.697302+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.697455+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 6951 writes, 28K keys, 6951 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6951 writes, 1245 syncs, 5.58 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.05              0.00         1    0.054       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.09              0.00         1    0.093       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc7090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.014       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e99dcc71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.697673+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.697822+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.697933+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.698163+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.698378+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.698531+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.698680+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.698882+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.699955+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.700578+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.701008+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.701768+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.702433+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.703055+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.703267+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.703568+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.704024+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.704310+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.704698+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.705003+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 655360 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.705260+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.705425+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.705612+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.705882+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.706189+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.706409+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.706604+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.706849+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.707037+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.707252+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.707424+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.707647+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.707846+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.708082+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.708374+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.708559+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.708848+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.709059+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.709306+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.709595+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.709857+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.710051+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.710217+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.710391+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 647168 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.710618+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.710818+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.711004+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.711185+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.711513+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.711699+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.711859+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.712101+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.712312+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.712549+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.712766+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.712955+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.713127+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.713266+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.713419+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.713580+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.713709+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.713858+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.714000+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.714163+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.714364+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.714549+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.714670+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.714802+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.714980+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.715184+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.715346+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.715547+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.715743+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.715915+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.716214+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.716432+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 638976 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.716589+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.716717+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.716872+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.717016+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.717145+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.717322+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.717559+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.717745+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.717969+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.718179+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.718358+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.718564+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.096008301s of 600.027343750s, submitted: 90
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 630784 heap: 76488704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.718733+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.718917+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.719100+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.719277+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.719461+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.719675+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.719899+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.720090+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.720246+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.720401+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.720601+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.720785+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.720966+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.721099+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.721257+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.721438+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.721695+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.722076+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.722259+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.722420+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.722592+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.722752+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.722865+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.723124+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.723300+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.723463+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.723729+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.723895+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.724084+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.724259+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.724438+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.724640+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.724818+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.724985+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.726291+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.726512+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.726663+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.726832+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.726983+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.727116+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.727267+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.727422+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.727535+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.728387+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.729535+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.730676+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.730884+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.731264+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.731404+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.731583+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.732777+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.733561+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.733947+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.734338+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.735143+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.735671+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.736118+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.736353+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.736888+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.737172+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.737621+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.738033+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.738616+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.738829+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.739161+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.739383+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.739610+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.739793+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.740014+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.740188+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.740413+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.740602+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.740835+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.740990+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.741265+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.741507+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.741817+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.741996+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.742158+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.742328+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.742548+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.742695+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.742833+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.743036+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.743209+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.743375+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.743591+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.743754+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.743899+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.744090+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.744263+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.744409+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.744524+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.744732+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.744919+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.745114+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.745343+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.745580+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.745737+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.745939+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.746132+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.746317+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.746598+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.746833+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.747012+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.747208+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 1687552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.747431+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.747590+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.747776+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 1679360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.747953+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.748097+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.748233+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.748402+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.748568+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.748721+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.748963+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.749227+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.750091+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.750237+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.750697+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.751182+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.751564+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.751758+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.751953+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.752228+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.752390+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.752553+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.752854+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.753201+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.753533+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.753772+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.753963+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.754166+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.754381+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.754542+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.754726+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.754991+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.755142+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.755298+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.755518+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.755887+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.756061+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.756224+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.756384+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.756512+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.756618+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.756757+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.756914+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.757108+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.757694+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.757829+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.758054+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.758284+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.758608+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.758787+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.758979+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.759224+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.759421+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.759584+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.759747+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.759918+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.760113+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 1671168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.760324+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.760525+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.760671+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.760817+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.761015+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.761187+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.761383+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.761537+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.761745+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.761941+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 1662976 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.762142+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.762374+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.762553+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.762742+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.762933+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.763098+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.763281+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.763448+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.763583+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.763742+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.763919+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890233 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.764071+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.764285+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fca37000/0x0/0x4ffc00000, data 0x128014/0x1e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.764511+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 1654784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657400
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.764688+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 186.598068237s of 186.921478271s, submitted: 90
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 127 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 1613824 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.765887+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 896151 data_alloc: 218103808 data_used: 237568
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 1597440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fc5c0000/0x0/0x4ffc00000, data 0x59b775/0x65e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.766361+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 10878976 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 130 ms_handle_reset con 0x55e99f657400 session 0x55e9a24a7e00
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.766592+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 10887168 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a104e400
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.766799+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 18210816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 131 ms_handle_reset con 0x55e9a104e400 session 0x55e9a24ae3c0
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.767227+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.767441+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.767836+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.768100+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 18202624 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.768271+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.768547+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:17 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.768731+0000)
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:17 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:17 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046209 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:17 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 18194432 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:17 compute-0 ceph-osd[90784]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.570235252s of 11.802167892s, submitted: 32
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.768907+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b9000/0x0/0x4ffc00000, data 0x159eea7/0x1664000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.769064+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.810767+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.811147+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.811594+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.812056+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.812459+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.812816+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.813110+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.813388+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.813636+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.813837+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.814055+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.814551+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.814967+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.815794+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.816327+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.816619+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.816869+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.628356934s of 18.760848999s, submitted: 13
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.817219+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 10
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049183 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.817571+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 18186240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.817800+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a090a/0x1667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a24c1c00
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.818147+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.818461+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 18161664 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.818845+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050071 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.819173+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.819971+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b6000/0x0/0x4ffc00000, data 0x15a09a5/0x1668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 17104896 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 11
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.820190+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 17080320 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.820466+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 17063936 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b5000/0x0/0x4ffc00000, data 0x15a0a6f/0x1669000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.825894356s of 10.000619888s, submitted: 13
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.820650+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054317 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.820871+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.821167+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.821400+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb5b3000/0x0/0x4ffc00000, data 0x15a0c9e/0x166a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.821626+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 17055744 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.821787+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1059907 data_alloc: 218103808 data_used: 262144
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.822046+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.822308+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.822530+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a294e/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.822745+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 17039360 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.553850174s of 10.000466347s, submitted: 42
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.822944+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057663 data_alloc: 218103808 data_used: 262144
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 16998400 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.823097+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.823286+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.823578+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.823868+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 16990208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb5b1000/0x0/0x4ffc00000, data 0x15a2a18/0x166d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.824095+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1062723 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.824521+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 16973824 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.824987+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 16957440 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.825440+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4574/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.825717+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791356087s of 10.000161171s, submitted: 30
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.825855+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064529 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.826009+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a46d9/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.826199+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.826372+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 16949248 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.826606+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78340096 unmapped: 16908288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.826738+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063005 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a47a3/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.826903+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.827188+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.827429+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.827750+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78405632 unmapped: 16842752 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.954462051s of 10.000647545s, submitted: 8
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.827922+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066317 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 16834560 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.828127+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a486d/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.828293+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.828507+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.828754+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 16818176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.829018+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066141 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a489c/0x166f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.829391+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.829669+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.829899+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78438400 unmapped: 16809984 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fb5ae000/0x0/0x4ffc00000, data 0x15a4937/0x1670000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.830215+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.945301056s of 10.004324913s, submitted: 10
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.830409+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067733 data_alloc: 218103808 data_used: 270336
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.830572+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.830837+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.831043+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a8000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.831218+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.831408+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072619 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.831642+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.831903+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.832068+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.832297+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.752876282s of 10.019038200s, submitted: 31
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.832555+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fb5a9000/0x0/0x4ffc00000, data 0x15a6653/0x1675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.832773+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.832918+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.833112+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.833273+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a5000/0x0/0x4ffc00000, data 0x15a80b6/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.833592+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076617 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 16801792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.833825+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.834034+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.834133+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 17031168 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.834238+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 17022976 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a6000/0x0/0x4ffc00000, data 0x15a811b/0x1678000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.834403+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429555893s of 10.579680443s, submitted: 16
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075239 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.834609+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.834760+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.834874+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 17014784 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.835043+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.835257+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fb5a7000/0x0/0x4ffc00000, data 0x15a814a/0x1677000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080909 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 16982016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.835407+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fb5a3000/0x0/0x4ffc00000, data 0x15a9f79/0x167a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 16965632 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.835533+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 16941056 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.835711+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 16916480 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.835931+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 16900096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.836091+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.703340530s of 10.039009094s, submitted: 101
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1086341 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 78356480 unmapped: 16891904 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb5a0000/0x0/0x4ffc00000, data 0x15ad952/0x167e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.836244+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.836391+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.836583+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.836846+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 15826944 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fb59f000/0x0/0x4ffc00000, data 0x15adaee/0x167f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.837001+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089693 data_alloc: 218103808 data_used: 303104
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.837135+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fb59b000/0x0/0x4ffc00000, data 0x15af876/0x1682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.837255+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79454208 unmapped: 15794176 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.837411+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.837646+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 79470592 unmapped: 15777792 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.837838+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.695872307s of 10.031254768s, submitted: 129
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095739 data_alloc: 218103808 data_used: 311296
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.838611+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb596000/0x0/0x4ffc00000, data 0x15b32a5/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80543744 unmapped: 14704640 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.838807+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.839411+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.839646+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fb597000/0x0/0x4ffc00000, data 0x15b336f/0x1687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.839958+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094987 data_alloc: 218103808 data_used: 315392
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 14663680 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.840584+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.840874+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.841019+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb593000/0x0/0x4ffc00000, data 0x15b4e5e/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 13606912 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.841203+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 13574144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fb594000/0x0/0x4ffc00000, data 0x15b4f28/0x168a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.841339+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.292222023s of 10.259329796s, submitted: 47
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.841700+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.842008+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.842144+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.842292+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.842534+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102263 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.842674+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.842797+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.843000+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.843212+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.843422+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb591000/0x0/0x4ffc00000, data 0x15b69ab/0x168d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101383 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 13541376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.843801+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688630104s of 10.714872360s, submitted: 17
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.844037+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 13500416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.844255+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.844466+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.844667+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.844832+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.845010+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.845183+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 13484032 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.845387+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.845613+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6b10/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.845877+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 13475840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.846126+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.969901085s of 11.010137558s, submitted: 5
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.846355+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6bda/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.846520+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.846707+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104871 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 13467648 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.846917+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6ca2/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.847089+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b6d07/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.847237+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 13451264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.847442+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58f000/0x0/0x4ffc00000, data 0x15b6d05/0x168f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.847686+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104743 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.847887+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.848061+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6ca4/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.848324+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 13443072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.848534+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.209359169s of 12.344432831s, submitted: 13
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 13410304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.848774+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103151 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.848963+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.849148+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.849319+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6d6e/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 13402112 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.849530+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.849689+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103167 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.849859+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6dd3/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.850029+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.850196+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.850422+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896079063s of 10.000349998s, submitted: 7
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.850632+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102975 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 13385728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.850796+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.850960+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6e9d/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.851173+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.851438+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb590000/0x0/0x4ffc00000, data 0x15b6f02/0x168e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 13352960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.851588+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1106511 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.851750+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a24c1800
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 13344768 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.851949+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 12
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.852090+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58e000/0x0/0x4ffc00000, data 0x15b70ae/0x1690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 13287424 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.852283+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912652969s of 10.000061989s, submitted: 11
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.852860+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107429 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 13271040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.853085+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.853359+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.853554+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 13262848 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.853850+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 13238272 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7398/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.854456+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110633 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.854885+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.855093+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b748f/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.855278+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.855528+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 13213696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.774273872s of 10.000308037s, submitted: 20
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.855715+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.855875+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.856043+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.856357+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58d000/0x0/0x4ffc00000, data 0x15b7593/0x1691000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.856581+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.856913+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111743 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.857221+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 13205504 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.857590+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 13148160 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.857820+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.857966+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58b000/0x0/0x4ffc00000, data 0x15b76c0/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.892574310s of 10.000001907s, submitted: 8
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.858156+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 13131776 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1112629 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.858289+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.858610+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.858881+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 13066240 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.859106+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7788/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.859240+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.859433+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.859564+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.859730+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.859904+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 13058048 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.934629440s of 10.000991821s, submitted: 10
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.860092+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1113751 data_alloc: 218103808 data_used: 331776
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.860229+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.860389+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7727/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.860516+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.860676+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.860773+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114461 data_alloc: 218103808 data_used: 335872
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.860907+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.861043+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fb58c000/0x0/0x4ffc00000, data 0x15b7881/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.861201+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.861370+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.757849693s of 10.007835388s, submitted: 14
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.861557+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 13049856 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1119553 data_alloc: 218103808 data_used: 344064
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.861712+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 13025280 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.861893+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb588000/0x0/0x4ffc00000, data 0x15b9635/0x1695000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 12607488 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4fb577000/0x0/0x4ffc00000, data 0x15ca73b/0x16a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [1])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.862099+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 11862016 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.862275+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85991424 unmapped: 9256960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.862458+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 9355264 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126243 data_alloc: 218103808 data_used: 344064
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.862680+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 9289728 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f9f70000/0x0/0x4ffc00000, data 0x162369f/0x16fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.862845+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 9240576 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.862971+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.863141+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86179840 unmapped: 9068544 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.811680794s of 10.004056931s, submitted: 91
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 145 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.863314+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86564864 unmapped: 8683520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141655 data_alloc: 218103808 data_used: 348160
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.863436+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.863584+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.863715+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f9ef1000/0x0/0x4ffc00000, data 0x169e8bc/0x177c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 8740864 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.863932+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 86794240 unmapped: 8454144 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.864075+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88276992 unmapped: 6971392 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e9e000/0x0/0x4ffc00000, data 0x16f345c/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1151375 data_alloc: 218103808 data_used: 356352
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.864215+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88301568 unmapped: 6946816 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.864371+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.864515+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88039424 unmapped: 7208960 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e7a000/0x0/0x4ffc00000, data 0x1715edb/0x17f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.864666+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 7110656 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.935678482s of 10.000359535s, submitted: 115
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.864821+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88498176 unmapped: 6750208 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.864946+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153195 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9e2a000/0x0/0x4ffc00000, data 0x176630e/0x1844000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.865072+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88629248 unmapped: 6619136 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.865195+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.865313+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88612864 unmapped: 6635520 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.865530+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 88760320 unmapped: 6488064 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.865716+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158005 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 6242304 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.865868+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9df1000/0x0/0x4ffc00000, data 0x179f4a5/0x187d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90120192 unmapped: 5128192 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dd9000/0x0/0x4ffc00000, data 0x17b7751/0x1895000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.866027+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 4988928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.866172+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9dae000/0x0/0x4ffc00000, data 0x17e1ddf/0x18c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89899008 unmapped: 5349376 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.467321396s of 10.002651215s, submitted: 62
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.866309+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9da1000/0x0/0x4ffc00000, data 0x17f03b4/0x18cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.866434+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156747 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 89948160 unmapped: 5300224 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.866554+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.866710+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.866913+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90161152 unmapped: 5087232 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.867076+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 90021888 unmapped: 5226496 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a24c1800 session 0x55e99f683a40
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.867182+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172771 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92217344 unmapped: 3031040 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9d20000/0x0/0x4ffc00000, data 0x186df18/0x194d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.867284+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 13
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92332032 unmapped: 2916352 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.867424+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92962816 unmapped: 2285568 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.867591+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93036544 unmapped: 2211840 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.506773949s of 10.000545502s, submitted: 270
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cee000/0x0/0x4ffc00000, data 0x18a05d4/0x197f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.867706+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 93298688 unmapped: 1949696 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0x18d34ae/0x19b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.867850+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167737 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.868018+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 2564096 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.868187+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92676096 unmapped: 2572288 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.868397+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 2326528 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.868550+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94035968 unmapped: 1212416 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c66000/0x0/0x4ffc00000, data 0x192a11c/0x1a08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.868702+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1174241 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 1155072 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.868875+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 892928 heap: 95248384 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.869033+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9c1e000/0x0/0x4ffc00000, data 0x1971777/0x1a50000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.869191+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94109696 unmapped: 2187264 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.250415802s of 10.249304771s, submitted: 52
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.869341+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.869512+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1184509 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94355456 unmapped: 1941504 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.869705+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94494720 unmapped: 1802240 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.869875+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94756864 unmapped: 1540096 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.870087+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95019008 unmapped: 1277952 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.870234+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94363648 unmapped: 1933312 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf026/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.870406+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181477 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.870609+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.870816+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.871051+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf263/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.871302+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.871580+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186381 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.871805+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.872073+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.373756409s of 13.546041489s, submitted: 28
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.872297+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3bd/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.872557+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.872704+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187283 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.872894+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.873033+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf379/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.873214+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94429184 unmapped: 1867776 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.873372+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.873631+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186769 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.873802+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94445568 unmapped: 1851392 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.873984+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.874242+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.047278404s of 11.189574242s, submitted: 28
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.874405+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf3b5/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.874556+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187191 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.874725+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.874876+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf47c/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.875036+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.875162+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.875329+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191453 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.875534+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.875708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.875873+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57b/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.809183121s of 10.000802040s, submitted: 15
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.876036+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.876214+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191277 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.876362+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.876546+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf51a/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.877132+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 1826816 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.877323+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.877762+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190587 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.878457+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.878942+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.879320+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf518/0x1aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.646739006s of 10.003696442s, submitted: 8
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.879672+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcc000/0x0/0x4ffc00000, data 0x19bf57d/0x1aa2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.879939+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189925 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.880086+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 1835008 heap: 96296960 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.880299+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.009005+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 9133 writes, 36K keys, 9133 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 9133 writes, 2084 syncs, 4.38 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2182 writes, 7209 keys, 2182 commit groups, 1.0 writes per commit group, ingest: 7.59 MB, 0.01 MB/s
                                           Interval WAL: 2182 writes, 839 syncs, 2.60 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bce000/0x0/0x4ffc00000, data 0x19bf54a/0x1aa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.009344+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94461952 unmapped: 2883584 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.009654+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190955 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.009983+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.010301+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.010638+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.936868668s of 10.001495361s, submitted: 15
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.010805+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcd000/0x0/0x4ffc00000, data 0x19bf677/0x1a9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.010994+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188083 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99edc4800 session 0x55e99eaa2f00
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f657400
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.011163+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e9a038a000 session 0x55e99f863a40
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f718000
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 ms_handle_reset con 0x55e99f657800 session 0x55e9a247c000
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e9a038a000
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.011323+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.011543+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.011708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.011882+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188387 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf795/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.012035+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.012184+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.012361+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.810064316s of 10.060415268s, submitted: 16
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.012547+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94470144 unmapped: 2875392 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.012674+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188195 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.012793+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bf740/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.012978+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.013102+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.013246+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bf73e/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.013377+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188243 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.013673+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.013847+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.014172+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.816052437s of 10.073899269s, submitted: 6
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.014397+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf6dc/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.014599+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187393 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.014778+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.014955+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.015069+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.015349+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.015615+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.015841+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.015990+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.016281+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.016567+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.016804+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.017013+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.017200+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.017378+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.017585+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.017731+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186703 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.017942+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.018074+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.018263+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf70b/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.514488220s of 19.649364471s, submitted: 4
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.018449+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.018702+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.018906+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.019119+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.019292+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bf7a6/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.019551+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.019746+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188279 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.019908+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.020094+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.020387+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.148883820s of 10.212854385s, submitted: 7
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd3000/0x0/0x4ffc00000, data 0x19bf904/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.020601+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.020779+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189373 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.021006+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94478336 unmapped: 2867200 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.021249+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.021441+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfb03/0x1a9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.021642+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.021833+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191691 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.021999+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.022201+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2859008 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.022444+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847572327s of 10.020855904s, submitted: 15
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.022665+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd2000/0x0/0x4ffc00000, data 0x19bfb98/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.022819+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191001 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.023032+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.023319+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1982464 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.023579+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 1974272 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.023785+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.023943+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193495 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.024361+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfdcf/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95395840 unmapped: 1949696 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.024524+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.024759+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfd88/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.024967+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.765477180s of 10.907700539s, submitted: 20
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x19bfe8a/0x1a9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.025156+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194267 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.025368+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.025571+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.025762+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bcf000/0x0/0x4ffc00000, data 0x19bfeff/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.025993+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.026190+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193819 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.026418+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19bfeba/0x1a9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.026590+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.026779+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 1941504 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.026986+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.027395+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195779 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95412224 unmapped: 1933312 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.027569+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628976822s of 11.735780716s, submitted: 18
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95436800 unmapped: 1908736 heap: 97345536 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bd1000/0x0/0x4ffc00000, data 0x19c0084/0x1a9d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.027717+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.027858+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.028014+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9bc7000/0x0/0x4ffc00000, data 0x19ca4b5/0x1aa7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 2654208 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.028184+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1208253 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.028318+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 2646016 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.028437+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b8c000/0x0/0x4ffc00000, data 0x1a04374/0x1ae2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95690752 unmapped: 2703360 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.028648+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.028903+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 2908160 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b52000/0x0/0x4ffc00000, data 0x1a3dfc9/0x1b1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.029222+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211041 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 2899968 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.029508+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896175385s of 10.457120895s, submitted: 145
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 2793472 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.029711+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 95666176 unmapped: 2727936 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.029921+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9b15000/0x0/0x4ffc00000, data 0x1a78e5c/0x1b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96591872 unmapped: 1802240 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.030127+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96952320 unmapped: 1441792 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.030509+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211969 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.030723+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 1409024 heap: 98394112 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.031067+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.031445+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9ac4000/0x0/0x4ffc00000, data 0x1acc2ef/0x1baa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.031843+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 2867200 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.032045+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213307 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96632832 unmapped: 2809856 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9aa5000/0x0/0x4ffc00000, data 0x1aec27d/0x1bc9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.032234+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96870400 unmapped: 2572288 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.032436+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.398673058s of 10.845589638s, submitted: 76
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96878592 unmapped: 2564096 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.032621+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a8b000/0x0/0x4ffc00000, data 0x1b03811/0x1be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.032835+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 2269184 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.033242+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222355 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97247232 unmapped: 2195456 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.033572+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b404d5/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97402880 unmapped: 2039808 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.033762+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.034079+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96845824 unmapped: 2596864 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.034261+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f9a51000/0x0/0x4ffc00000, data 0x1b409b2/0x1c1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.034427+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223335 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96985088 unmapped: 2457600 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.034641+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.034826+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a3e000/0x0/0x4ffc00000, data 0x1b51d6c/0x1c2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 96993280 unmapped: 2449408 heap: 99442688 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.035059+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.777762413s of 10.991305351s, submitted: 54
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.035266+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98426880 unmapped: 2064384 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.035416+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1227443 data_alloc: 218103808 data_used: 368640
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 1859584 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.035555+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.035774+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f9a0e000/0x0/0x4ffc00000, data 0x1b8303c/0x1c60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 1851392 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.035953+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.036122+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 2572288 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.036336+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235585 data_alloc: 218103808 data_used: 380928
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98017280 unmapped: 2473984 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.036548+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f99da000/0x0/0x4ffc00000, data 0x1bb2c5a/0x1c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98148352 unmapped: 2342912 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.036731+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: handle_auth_request added challenge on 0x55e99f719c00
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 2088960 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.036937+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 14
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.176039696s of 10.044042587s, submitted: 58
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 1974272 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.037131+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9982000/0x0/0x4ffc00000, data 0x1c085b8/0x1cec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98074624 unmapped: 2416640 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.037265+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9983000/0x0/0x4ffc00000, data 0x1c08a7b/0x1ceb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241069 data_alloc: 218103808 data_used: 380928
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 1204224 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.037557+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99196928 unmapped: 1294336 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.037735+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.037958+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9960000/0x0/0x4ffc00000, data 0x1c2c683/0x1d0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99352576 unmapped: 1138688 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.038151+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99622912 unmapped: 868352 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.038324+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246425 data_alloc: 218103808 data_used: 380928
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.038538+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f9934000/0x0/0x4ffc00000, data 0x1c58293/0x1d3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99794944 unmapped: 696320 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.038735+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99221504 unmapped: 1269760 heap: 100491264 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.038892+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.755167961s of 10.000792503s, submitted: 55
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99409920 unmapped: 2129920 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.039091+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99418112 unmapped: 2121728 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.039257+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f98eb000/0x0/0x4ffc00000, data 0x1ca0bfe/0x1d83000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258021 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99524608 unmapped: 2015232 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.039413+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 98787328 unmapped: 2752512 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.039582+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100073472 unmapped: 1466368 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.039798+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100204544 unmapped: 1335296 heap: 101539840 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.039929+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9882000/0x0/0x4ffc00000, data 0x1d094ff/0x1deb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100433920 unmapped: 2154496 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.040070+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255919 data_alloc: 218103808 data_used: 389120
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9880000/0x0/0x4ffc00000, data 0x1d0d156/0x1ded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.040276+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100540416 unmapped: 2048000 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.040463+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100630528 unmapped: 1957888 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.040648+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.610513687s of 10.004332542s, submitted: 107
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99860480 unmapped: 2727936 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.040803+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 99909632 unmapped: 2678784 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.040958+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263663 data_alloc: 218103808 data_used: 389120
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.041127+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9833000/0x0/0x4ffc00000, data 0x1d594d1/0x1e3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.041305+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9831000/0x0/0x4ffc00000, data 0x1d5cab0/0x1e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100048896 unmapped: 2539520 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.041535+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.041677+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.041900+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268991 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100130816 unmapped: 2457600 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.042031+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100368384 unmapped: 2220032 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.042171+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 100548608 unmapped: 2039808 heap: 102588416 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.042305+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.842511177s of 10.004651070s, submitted: 45
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97e8000/0x0/0x4ffc00000, data 0x1da2e8d/0x1e86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.042434+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.042580+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f97ca000/0x0/0x4ffc00000, data 0x1dc2d76/0x1ea4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1271023 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.042708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101736448 unmapped: 1900544 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.043176+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101818368 unmapped: 1818624 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.043380+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.043598+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101883904 unmapped: 1753088 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.043762+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 1589248 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9787000/0x0/0x4ffc00000, data 0x1e04161/0x1ee7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1277019 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.043876+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.044035+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 1531904 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.044258+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 1523712 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.390405655s of 10.003252983s, submitted: 43
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f9775000/0x0/0x4ffc00000, data 0x1e15f98/0x1ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.044410+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 1638400 heap: 103636992 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.044572+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1289409 data_alloc: 218103808 data_used: 405504
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.044749+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102113280 unmapped: 2572288 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.044898+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102596608 unmapped: 2088960 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f970c000/0x0/0x4ffc00000, data 0x1e7e105/0x1f62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.045090+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 102604800 unmapped: 2080768 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.045292+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103579648 unmapped: 1105920 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.045428+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [155,155], i have 153, src has [1,155]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 153 handle_osd_map epochs [154,155], i have 153, src has [1,155]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103759872 unmapped: 925696 heap: 104685568 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292805 data_alloc: 218103808 data_used: 413696
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.045572+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103989248 unmapped: 1744896 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.045725+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.045876+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 103997440 unmapped: 1736704 heap: 105734144 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.912647247s of 10.003127098s, submitted: 105
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f96be000/0x0/0x4ffc00000, data 0x1ec9226/0x1faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.046039+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104103936 unmapped: 2678784 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.046179+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104136704 unmapped: 2646016 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300925 data_alloc: 218103808 data_used: 413696
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f925e000/0x0/0x4ffc00000, data 0x1f18ec5/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.047596+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 155 heartbeat osd_stat(store_statfs(0x4f925e000/0x0/0x4ffc00000, data 0x1f18ec5/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.048065+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.049264+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104382464 unmapped: 2400256 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.049544+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 155 handle_osd_map epochs [156,157], i have 155, src has [1,157]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.050658+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310587 data_alloc: 218103808 data_used: 421888
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.051577+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 2113536 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.052346+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 1933312 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f9235000/0x0/0x4ffc00000, data 0x1f3f6cc/0x2028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.053046+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 104849408 unmapped: 1933312 heap: 106782720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739536285s of 10.003342628s, submitted: 74
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.053689+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105365504 unmapped: 3514368 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.053847+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f91d4000/0x0/0x4ffc00000, data 0x1f9f56a/0x208a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315139 data_alloc: 218103808 data_used: 421888
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.054385+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.054578+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.054855+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 3268608 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.055253+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105725952 unmapped: 3153920 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f91c2000/0x0/0x4ffc00000, data 0x1faf150/0x209b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.055428+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105644032 unmapped: 3235840 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320365 data_alloc: 218103808 data_used: 438272
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.055632+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105701376 unmapped: 3178496 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f91c2000/0x0/0x4ffc00000, data 0x1faf150/0x209b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.055977+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105717760 unmapped: 3162112 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.056277+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 105742336 unmapped: 3137536 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.824184418s of 10.002670288s, submitted: 51
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9168000/0x0/0x4ffc00000, data 0x20087cd/0x20f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.056448+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.056754+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332749 data_alloc: 218103808 data_used: 438272
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.057065+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106176512 unmapped: 2703360 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.057308+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 2580480 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.057532+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x203cc51/0x212a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 2523136 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.057696+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 2457600 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.057887+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 2318336 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331605 data_alloc: 218103808 data_used: 438272
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.058091+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106561536 unmapped: 2318336 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.058246+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 1269760 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.058532+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107773952 unmapped: 1105920 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f90ef000/0x0/0x4ffc00000, data 0x20813ec/0x216f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.058744+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.666082382s of 10.835702896s, submitted: 36
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 1187840 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.058890+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107724800 unmapped: 1155072 heap: 108879872 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.059111+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1343413 data_alloc: 218103808 data_used: 438272
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 2301952 heap: 109928448 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.059256+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107552768 unmapped: 2375680 heap: 109928448 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.059436+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 3366912 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.059690+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 heartbeat osd_stat(store_statfs(0x4f9060000/0x0/0x4ffc00000, data 0x210f39f/0x21fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106618880 unmapped: 4358144 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.059905+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 4308992 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.060073+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349605 data_alloc: 218103808 data_used: 446464
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106668032 unmapped: 4308992 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.060292+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106807296 unmapped: 4169728 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.060430+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 4005888 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f9005000/0x0/0x4ffc00000, data 0x2167a88/0x2259000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.060566+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108347392 unmapped: 2629632 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.060728+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.010040283s of 10.882234573s, submitted: 96
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 2490368 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f8fe2000/0x0/0x4ffc00000, data 0x2189fb7/0x227c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.060871+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354141 data_alloc: 218103808 data_used: 446464
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 2482176 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.061156+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108625920 unmapped: 2351104 heap: 110977024 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.061367+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 3301376 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.061578+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 3293184 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.061773+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 3178496 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.061985+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1371181 data_alloc: 218103808 data_used: 454656
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 3514368 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 161 heartbeat osd_stat(store_statfs(0x4f8f76000/0x0/0x4ffc00000, data 0x21f457f/0x22e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.062123+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108544000 unmapped: 3481600 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.062242+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108814336 unmapped: 3211264 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.062389+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108683264 unmapped: 3342336 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.062550+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f8f0e000/0x0/0x4ffc00000, data 0x225d974/0x234f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108732416 unmapped: 3293184 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.832097054s of 10.369318008s, submitted: 105
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.062691+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1379751 data_alloc: 218103808 data_used: 462848
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108806144 unmapped: 3219456 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.062918+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 2965504 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f8ef5000/0x0/0x4ffc00000, data 0x2278853/0x2369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.063120+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 109060096 unmapped: 2965504 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.063269+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108716032 unmapped: 3309568 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.063422+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3096576 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f8eb0000/0x0/0x4ffc00000, data 0x22b874a/0x23ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.063679+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386655 data_alloc: 218103808 data_used: 471040
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108929024 unmapped: 3096576 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.063824+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 3088384 heap: 112025600 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.063970+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 2826240 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.064144+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 2801664 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f8e60000/0x0/0x4ffc00000, data 0x230aa9b/0x23fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.064305+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110272512 unmapped: 2801664 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.064513+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390587 data_alloc: 218103808 data_used: 475136
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 2596864 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.064692+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110477312 unmapped: 2596864 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.064863+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.478283882s of 12.892833710s, submitted: 111
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110551040 unmapped: 2523136 heap: 113074176 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.065040+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 3448832 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.065184+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110673920 unmapped: 3448832 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f7c8c000/0x0/0x4ffc00000, data 0x233d338/0x2431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.065322+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396817 data_alloc: 218103808 data_used: 483328
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 3432448 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.065461+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 3383296 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.065640+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110542848 unmapped: 3579904 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.065974+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f7c61000/0x0/0x4ffc00000, data 0x23687ca/0x245d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110559232 unmapped: 3563520 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.066135+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 3555328 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.066289+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407455 data_alloc: 218103808 data_used: 499712
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 3334144 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.066417+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7c1f000/0x0/0x4ffc00000, data 0x23a87c8/0x249e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 111878144 unmapped: 2244608 heap: 114122752 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.066586+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.526010513s of 10.000200272s, submitted: 99
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113090560 unmapped: 2080768 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.066709+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 1966080 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.066876+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113205248 unmapped: 1966080 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.067015+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412423 data_alloc: 218103808 data_used: 512000
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a43000/0x0/0x4ffc00000, data 0x23e4419/0x24da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 1777664 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.067154+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113393664 unmapped: 1777664 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.067304+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a43000/0x0/0x4ffc00000, data 0x23e4419/0x24da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 1712128 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.067587+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a29000/0x0/0x4ffc00000, data 0x23ff43d/0x24f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 1630208 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.067753+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 1630208 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.067866+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1415755 data_alloc: 218103808 data_used: 512000
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 1613824 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.068060+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 2539520 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.068261+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.746135712s of 10.000371933s, submitted: 78
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 2072576 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f6a02000/0x0/0x4ffc00000, data 0x2425255/0x251c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.068403+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 2072576 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _renew_subs
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 169 ms_handle_reset con 0x55e99f719c00 session 0x55e9a26281e0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.068547+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 1744896 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.068695+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 15
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1419647 data_alloc: 218103808 data_used: 520192
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 1728512 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.068934+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113442816 unmapped: 1728512 heap: 115171328 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.069095+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.069235+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6987000/0x0/0x4ffc00000, data 0x249fe4d/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.069365+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113590272 unmapped: 2629632 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.069566+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423983 data_alloc: 218103808 data_used: 520192
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.069735+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.069956+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 2465792 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.070148+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f6987000/0x0/0x4ffc00000, data 0x249fe4d/0x2597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.937213898s of 11.105960846s, submitted: 234
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.070351+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.070535+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424925 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.070696+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.070878+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6983000/0x0/0x4ffc00000, data 0x24a18b0/0x259a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.071045+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.071198+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.071394+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.071508+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.071730+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.071881+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.072036+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.072199+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.072425+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.072654+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.072860+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.073053+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.073296+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.073553+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.073799+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.073954+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.074084+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.074242+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.074422+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.074592+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.074751+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.074909+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 2449408 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.075059+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.075188+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.075363+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.075628+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.075818+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.075988+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.076224+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.076394+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.076637+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.077619+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.078786+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.079048+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.079249+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.079959+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.080582+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.080993+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 2613248 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.081504+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.081929+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.082317+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.082500+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.082752+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426397 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6977000/0x0/0x4ffc00000, data 0x24ad617/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.082940+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.083247+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.083539+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113614848 unmapped: 2605056 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.246944427s of 50.285312653s, submitted: 15
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 ms_handle_reset con 0x55e9a24c1c00 session 0x55e9a24aeb40
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.083715+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 2064384 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.083964+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Got map version 16
Nov 22 06:10:18 compute-0 ceph-osd[90784]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.084131+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.084370+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.084597+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.084789+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.084958+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.085139+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.085375+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.085540+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.085693+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.085851+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.086013+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.086168+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.086356+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.086573+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.086762+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.086975+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.087166+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.087334+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.087534+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.087749+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.087925+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.088078+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.088253+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.088433+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.088545+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.088708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.088949+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.089085+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.089202+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.089327+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.089443+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:17.089594+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 2048000 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:18.089788+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 2023424 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:19.089927+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114040832 unmapped: 2179072 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:20.090068+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113926144 unmapped: 2293760 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:21.090191+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 2269184 heap: 116219904 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:22.090340+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf dump' '{prefix=perf dump}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf schema' '{prefix=perf schema}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:23.090528+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:24.090637+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:25.090756+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:26.090913+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:27.091028+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:28.091151+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:29.091304+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:30.091450+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:31.091567+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:32.091765+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:33.091892+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:34.092020+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:35.092146+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:36.092285+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:37.092460+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:38.092635+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:39.092848+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 13467648 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:40.093011+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:41.093130+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:42.093264+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:43.093457+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:44.093631+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:45.093774+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:46.093881+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:47.093994+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:48.094134+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:49.094261+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:50.094551+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:51.094688+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:52.094903+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:53.095084+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:54.095230+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:55.095411+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 13459456 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:56.095556+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:57.095847+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:58.096045+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:59.096238+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:00.096458+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:01.096671+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:02.096902+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:03.097073+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:04.097265+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:05.097438+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:06.097605+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:07.097778+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:08.097986+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:09.098164+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:10.098298+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:11.098468+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:12.098672+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:13.098826+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:14.098980+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:15.099196+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:16.099333+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:17.099535+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:18.099685+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:19.099844+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 13451264 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:20.100056+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 13443072 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:21.100230+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 13443072 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:22.100456+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:23.100707+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:24.100882+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:25.101162+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:26.101302+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:27.101500+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:28.101715+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:29.101914+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:30.102070+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:31.102251+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:32.102574+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:33.102725+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:34.102993+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:35.103347+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 13434880 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:36.103581+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:37.103753+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:38.103988+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:39.104149+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:40.104329+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:41.104524+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:42.104733+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:43.104879+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:44.105022+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:45.105185+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:46.105384+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:47.105621+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:48.105980+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:49.106171+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:50.106331+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:51.106533+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:52.106734+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:53.106951+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:54.107113+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:55.107272+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:56.107437+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:57.107608+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:58.107813+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:59.107990+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:00.108184+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:01.108377+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 13426688 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:02.108579+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:03.108702+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:04.109169+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:05.109355+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:06.109574+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:07.109766+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:08.109917+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:09.110084+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:10.110326+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:11.110506+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:12.110702+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:13.110924+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:14.111126+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:15.111385+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 13418496 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:16.111693+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:17.112221+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:18.112468+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:19.112929+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:20.113103+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:21.113307+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:22.113560+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:23.113742+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:24.114041+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:25.114244+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:26.114442+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:27.114694+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:28.114868+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:29.115007+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:30.115284+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:31.115574+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:32.115821+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:33.115994+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 13410304 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:34.116192+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:35.116353+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:36.116513+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:37.116681+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:38.116880+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:39.117017+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:40.117160+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:41.117296+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:42.117451+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:43.117591+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:44.117708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:45.117887+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:46.118044+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:47.118218+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:48.118361+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:49.118533+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:50.118666+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:51.118804+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:52.119025+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:53.119195+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:54.119342+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:55.119523+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:56.119703+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:57.119892+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:58.120093+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:59.120321+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:00.120467+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:01.120678+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:02.121047+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:03.121209+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:04.121394+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:05.121557+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 13393920 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:06.121732+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:07.121902+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:08.122061+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:09.122281+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:10.122548+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:11.122764+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:12.123020+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:13.123252+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:14.123450+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:15.123605+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:16.123752+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:17.123908+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:18.124082+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:19.124251+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:20.124412+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:21.124563+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:22.125350+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:23.125564+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:24.125708+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 13385728 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:25.125862+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 13377536 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:26.126066+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 13377536 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:27.126194+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 13377536 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:28.126378+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 13377536 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:29.126543+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 13377536 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:30.126714+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:31.126884+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:32.127050+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:33.127213+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:34.127432+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:35.127640+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:36.127830+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:37.128009+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:38.128158+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:39.128289+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:40.128461+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:41.128680+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:42.128918+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:43.129101+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:44.129260+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:45.129454+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 13369344 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:46.129863+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:47.130044+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:48.130206+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:49.130349+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:50.131719+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:51.131917+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:52.132073+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:53.132222+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:54.132772+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:55.132925+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:56.133103+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:57.133284+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:58.133464+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:59.133795+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113901568 unmapped: 13361152 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:00.133975+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:01.134241+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:02.134594+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:03.134754+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:04.135033+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:05.135209+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:06.135424+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113909760 unmapped: 13352960 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:07.135548+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:08.135722+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:09.135894+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:10.136058+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:11.136196+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:12.136556+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:13.137002+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:14.137235+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:15.137515+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:16.137797+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 13344768 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:17.138113+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3213 syncs, 3.78 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3014 writes, 9902 keys, 3014 commit groups, 1.0 writes per commit group, ingest: 13.71 MB, 0.02 MB/s
                                           Interval WAL: 3014 writes, 1129 syncs, 2.67 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113934336 unmapped: 13328384 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:18.138408+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:19.138592+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:20.139218+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:21.139537+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:22.139832+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:23.140080+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:24.140328+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:25.140604+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:26.140838+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:27.141085+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:28.141271+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:29.141513+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:30.141732+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:31.141905+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:32.142092+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:33.142310+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:34.142515+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:35.142685+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:36.142841+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:37.143028+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:38.143197+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113942528 unmapped: 13320192 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:39.143383+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:40.143540+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:41.143694+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:42.143859+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:43.144014+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:44.144194+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:45.144349+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:46.144563+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:47.144770+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:48.144959+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:49.145135+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:50.145278+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:51.145448+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:52.145685+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:53.145825+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:54.146001+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113950720 unmapped: 13312000 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:55.146165+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:56.146281+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:57.146447+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:58.146544+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:59.146651+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:00.146790+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:01.146973+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:02.147159+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:03.147339+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:04.147539+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:05.147686+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:06.147825+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:07.147976+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:08.148121+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:09.148308+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:10.148609+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 13303808 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:11.148757+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:12.149306+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:13.149439+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:14.149561+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:15.149671+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:16.149820+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:17.150044+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:18.150301+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:19.150643+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:20.150848+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:21.151043+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:22.151223+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:23.151426+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:24.151878+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:25.152372+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:26.153049+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:27.153605+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113967104 unmapped: 13295616 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:28.154089+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:29.154509+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:30.154783+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:31.154993+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:32.155413+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:33.155630+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:34.155786+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:35.155952+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:36.156101+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:37.156410+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:38.156662+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:39.157004+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:40.157308+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:41.157504+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:42.157685+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1425037 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:43.158239+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113975296 unmapped: 13287424 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:44.158543+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113983488 unmapped: 13279232 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:45.158710+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 361.353485107s of 361.428039551s, submitted: 204
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114024448 unmapped: 13238272 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:46.158882+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114081792 unmapped: 13180928 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:47.159045+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 13148160 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:48.159195+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 13148160 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:49.159341+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 13148160 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:50.159502+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 13148160 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:51.159628+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114114560 unmapped: 13148160 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:52.159801+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:53.159920+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:54.160091+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:55.160240+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:56.160580+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:57.161169+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:58.161502+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:59.161752+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:00.161945+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:01.162162+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:02.162515+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:03.162873+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:04.163015+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:05.163221+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:06.163543+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:07.163847+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:08.163961+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:09.164238+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:10.164378+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:11.164643+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:12.165079+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 13131776 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:13.165336+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:14.165493+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:15.165666+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:16.165815+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:17.165988+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:18.166153+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:19.166400+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114139136 unmapped: 13123584 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:20.166712+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:21.166853+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:22.167122+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:23.167286+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:24.167436+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:25.167589+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:26.167861+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:27.168056+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:28.168216+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:29.168354+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:30.168507+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:31.168697+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:32.168848+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:33.168978+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:34.169113+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:35.169264+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:36.169443+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:37.169569+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:38.169747+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:39.169921+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114147328 unmapped: 13115392 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:40.170084+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:41.170227+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:42.170406+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:43.170565+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:44.170718+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:45.170885+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:46.171066+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:47.171216+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:48.171392+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:49.171549+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:50.171657+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:51.171815+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:52.172051+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:53.172221+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:54.172594+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114155520 unmapped: 13107200 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:55.172721+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:56.172831+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:57.172976+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:58.173129+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:59.173320+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:00.174309+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:01.175168+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:02.177040+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:03.177346+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:04.177819+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:05.178662+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:06.179257+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:07.179666+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:08.180365+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:09.181371+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:10.181694+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113680384 unmapped: 13582336 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:11.181993+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:12.182318+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:13.182508+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:14.182706+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:15.183182+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:16.183356+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:17.184039+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:18.184287+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:19.184546+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:20.184814+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:21.184990+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:22.185169+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:23.185251+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:24.185536+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:25.185710+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:26.185854+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113688576 unmapped: 13574144 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:27.186089+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:28.186428+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:29.186657+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:30.186857+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:31.187003+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:32.187223+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:33.187390+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:34.187595+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:35.187781+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:36.187923+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:37.188130+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:38.188327+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:39.188609+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:40.188813+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:41.189018+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:42.189227+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:43.189323+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:44.189501+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:45.189665+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:46.189797+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:47.189926+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:48.190066+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113696768 unmapped: 13565952 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:49.190230+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 13557760 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:50.190421+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 13557760 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:51.190623+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:52.190833+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:53.190988+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:54.191171+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:55.191341+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:56.191491+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:57.191633+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:58.191797+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:59.191932+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:00.192068+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:01.192661+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:02.192860+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 13549568 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:03.193013+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:04.193177+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:05.193348+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:06.193810+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:07.194119+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:08.194516+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:09.194737+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:10.195010+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:11.195386+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:12.195769+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:13.195995+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:14.196150+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:15.196360+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113721344 unmapped: 13541376 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:16.196737+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:17.196913+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:18.197067+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:19.197258+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:20.197447+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:21.197633+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:22.197801+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:23.198000+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:24.198178+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:25.198361+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:26.198536+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:27.198740+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:28.198906+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:29.199082+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:30.199284+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:31.199448+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:32.199627+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:33.199765+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113729536 unmapped: 13533184 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:34.199915+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:35.200052+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:36.200169+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:37.200291+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:38.200418+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:39.200520+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:40.200626+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:41.200745+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:42.200934+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:18 compute-0 ceph-osd[90784]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:18 compute-0 ceph-osd[90784]: bluestore.MempoolThread(0x55e99dda5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424813 data_alloc: 218103808 data_used: 528384
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:43.202578+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:44.202749+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 13524992 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:45.202870+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 13139968 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:46.203029+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 114319360 unmapped: 12943360 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: osd.1 170 heartbeat osd_stat(store_statfs(0x4f6978000/0x0/0x4ffc00000, data 0x24ad82a/0x25a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: tick
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_tickets
Nov 22 06:10:18 compute-0 ceph-osd[90784]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:47.203166+0000)
Nov 22 06:10:18 compute-0 ceph-osd[90784]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 13402112 heap: 127262720 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:18 compute-0 ceph-osd[90784]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 22 06:10:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439633119' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 06:10:18 compute-0 rsyslogd[1005]: imjournal from <np0005531754:ceph-osd>: begin to drop messages due to rate-limiting
Nov 22 06:10:18 compute-0 podman[292393]: 2025-11-22 06:10:18.293688409 +0000 UTC m=+0.148289872 container health_status 0d2750781726c1b0d3952db306e2c31528722de99a21da56e2b3e294a91c3736 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:10:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 22 06:10:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612328363' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 06:10:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759300882' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/579627788' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3199070274' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/439633119' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3612328363' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1759300882' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 22 06:10:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077221358' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 06:10:18 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 22 06:10:18 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146929474' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14933 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14935 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14937 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14939 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2077221358' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3146929474' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mon[75840]: from='client.14933 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:19 compute-0 ceph-mon[75840]: pgmap v1527: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14945 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14947 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 22 06:10:20 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572271206' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.14935 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.14937 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.14939 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.14941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.14945 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2572271206' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 06:10:20 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14951 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 22 06:10:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669141516' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 06:10:21 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:21 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14955 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:21 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 06:10:21 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1240833262' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 22 06:10:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498815368' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: from='client.14947 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: from='client.14951 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2669141516' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: pgmap v1528: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:22 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1240833262' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 06:10:22 compute-0 nova_compute[255660]: 2025-11-22 06:10:22.129 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:10:22 compute-0 nova_compute[255660]: 2025-11-22 06:10:22.129 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 06:10:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 06:10:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 06:10:22 compute-0 nova_compute[255660]: 2025-11-22 06:10:22.268 255664 DEBUG nova.compute.manager [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:24.152645+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67936256 unmapped: 163840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:25.152911+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:26.153087+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:27.153281+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:28.153497+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:29.153722+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67952640 unmapped: 147456 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:30.153962+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:31.154113+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67960832 unmapped: 139264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:32.154282+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:33.154424+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:34.154590+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67969024 unmapped: 131072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:35.154741+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:36.154907+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:37.155077+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:38.155393+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67977216 unmapped: 122880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:39.155515+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67985408 unmapped: 114688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:40.155754+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:41.156012+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 67993600 unmapped: 106496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:42.156347+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:43.156587+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68001792 unmapped: 98304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:44.156782+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 325.705718994s of 325.754638672s, submitted: 12
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:45.156970+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:46.157204+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:47.157445+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:48.157760+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:49.158051+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68648960 unmapped: 1548288 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:50.158354+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:51.158801+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:52.159106+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68665344 unmapped: 1531904 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:53.159894+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:54.160315+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:55.160599+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68673536 unmapped: 1523712 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:56.161025+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:57.161430+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 1515520 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:58.161804+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:36:59.162135+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:00.162528+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68689920 unmapped: 1507328 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:01.162920+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:02.163199+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68698112 unmapped: 1499136 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:03.163624+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:04.163999+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68706304 unmapped: 1490944 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:05.164240+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:06.164521+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68714496 unmapped: 1482752 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:07.164731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:08.164868+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:09.165048+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68722688 unmapped: 1474560 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:10.165236+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:11.165511+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:12.165658+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:13.165823+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:14.166022+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:15.166183+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:16.166442+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:17.166616+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:18.166851+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:19.167005+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:20.167208+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:21.167386+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:22.167557+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:23.167734+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:24.167886+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:25.168053+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:26.168259+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:27.168435+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:28.168622+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:29.170061+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:30.170694+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:31.170953+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:32.171176+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:33.171344+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:34.171566+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:35.171775+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:36.172014+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:37.172165+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:38.172292+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:39.172518+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:40.172655+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:41.172802+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:42.172930+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:43.173069+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:44.173258+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:45.173401+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:46.173587+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:47.173918+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:48.174067+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:49.174228+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:50.174404+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:51.174542+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:52.174681+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:53.174867+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:54.174998+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:55.175127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:56.175285+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:57.175378+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:58.175511+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:37:59.175649+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:00.175829+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:01.175988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:02.176133+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:03.176326+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:04.176574+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:05.176770+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:06.176994+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:07.177158+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:08.177334+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:09.177567+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68739072 unmapped: 1458176 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:10.177764+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:11.177922+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:12.178156+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:13.178419+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 1449984 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:14.178677+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:15.178886+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:16.179113+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:17.179398+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:18.179630+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1441792 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:19.179872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:20.180117+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:21.180345+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:22.180526+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:23.180690+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:24.180829+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:25.180968+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:26.181181+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:27.181369+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:28.181540+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:29.181690+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:30.181829+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 1433600 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:31.181944+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:32.182075+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:33.182334+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:34.182840+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:35.182967+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:36.183132+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:37.183285+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:38.183458+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:39.183840+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:40.183988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:41.184154+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:42.184364+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:43.184502+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 1425408 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:44.184625+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:45.184761+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:46.184964+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:47.185338+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:48.185512+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:49.185636+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:50.185770+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:51.185938+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:52.186109+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:53.186274+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:54.186421+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:55.186582+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:56.186765+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:57.186887+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:58.186991+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:38:59.187119+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:00.187259+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:01.187380+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:02.187621+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:03.187817+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:04.187975+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 1417216 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:05.188129+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:06.188307+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:07.188452+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:08.188574+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:09.188772+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:10.188946+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:11.189127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1400832 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:12.189276+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:13.189453+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:14.189615+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:15.189763+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:16.189942+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:17.190111+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:18.190535+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:19.190710+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:20.190855+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:21.191038+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:22.191242+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:23.191423+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:24.191547+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:25.191724+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:26.191902+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:27.192119+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:28.192238+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:29.192437+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:30.192544+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:31.192676+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:32.192798+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:33.192951+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:34.193137+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:35.193303+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:36.193512+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:37.193665+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:38.194032+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:39.194207+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:40.194420+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:41.194606+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:42.194807+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:43.195007+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:44.195215+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:45.195386+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:46.195531+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:47.195672+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:48.195821+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:49.195994+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 1376256 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:50.196161+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:51.196333+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:52.196517+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:53.196686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:54.196830+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 1368064 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:55.196967+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:56.197559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:57.197733+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:58.197884+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:39:59.198046+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:00.198234+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:01.198394+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 1351680 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:02.198572+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:03.198721+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:04.198882+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:05.199079+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:06.199334+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:07.199538+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:08.199731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:09.199932+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:10.200090+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:11.200256+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:12.200420+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:13.200625+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:14.200768+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 1409024 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:15.200916+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:16.201124+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:17.201345+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:18.202175+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:19.202709+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1392640 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc ms_handle_reset ms_handle_reset con 0x56464d217c00
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: get_auth_request con 0x56464ffd3800 auth_method 0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:20.202855+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:21.202972+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:22.203101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:23.203272+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:24.203410+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:25.203553+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 ms_handle_reset con 0x56464dfb5000 session 0x56464d1ab4a0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffce400
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:26.203750+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:27.203913+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:28.204017+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:29.204173+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:30.204320+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:31.204540+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:32.204688+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:33.204858+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:34.205020+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1228800 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:35.205169+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:36.205319+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:37.205513+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:38.205689+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:39.205872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:40.206140+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:41.206288+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:42.206510+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:43.207192+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:44.207421+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:45.207630+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:46.207823+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:47.208023+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:48.208182+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:49.208373+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:50.208668+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:51.208833+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:52.209036+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:53.209192+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:54.209388+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:55.209535+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:56.209734+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:57.209927+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:58.210049+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:40:59.210179+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:00.210362+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:01.210603+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:02.210832+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:03.210999+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:04.211173+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:05.211354+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:06.211604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:07.211832+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:08.212007+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:09.212169+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:10.212401+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:11.212584+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:12.212784+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:13.212981+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:14.213130+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:15.213296+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:16.213495+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:17.213698+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:18.213852+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:19.214062+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:20.214572+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:21.214730+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:22.214881+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:23.215040+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:24.215180+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:25.215340+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:26.215529+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:27.215661+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:28.215887+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:29.216052+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:30.216216+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:31.216361+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:32.216586+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:33.216731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:34.216890+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:35.217056+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:36.217301+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:37.217451+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 ms_handle_reset con 0x56464e5b8000 session 0x56464e6b4000
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffce800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:38.217657+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:39.217799+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:40.217963+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:41.218113+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:42.218299+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:43.218445+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:44.218531+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:45.218701+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:46.218846+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:47.218970+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:48.219143+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:49.219326+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:50.219532+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:51.219684+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:52.219852+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:53.220064+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:54.220209+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:55.220397+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:56.220548+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:57.220724+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:58.220848+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:41:59.221051+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:00.221235+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:01.221367+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:02.221529+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:03.221730+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:04.221888+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:05.222075+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:06.222277+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:07.222435+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:08.222588+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:09.222728+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:10.222947+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:11.223114+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:12.223320+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:13.223448+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:14.223598+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:15.223718+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:16.223937+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:17.224105+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:18.224263+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:19.224418+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:20.224583+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:21.224731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:22.224872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:23.224988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:24.225141+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:25.225314+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:26.225533+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:27.225684+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:28.225800+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:29.225971+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:30.226101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:31.226254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:32.226424+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:33.226559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:34.226716+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:35.226883+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:36.227068+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:37.227239+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:38.227393+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:39.227538+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:40.227675+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:41.227829+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:42.227998+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:43.228153+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:44.228280+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:45.228443+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:46.228678+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:47.228855+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:48.229008+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:49.229212+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:50.229385+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:51.229562+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:52.229722+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:53.229899+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:54.230030+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:55.230172+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:56.230324+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:57.230520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:58.230653+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:42:59.230808+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:00.230968+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:01.231119+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:02.231243+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:03.231408+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:04.231573+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:05.231753+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:06.231917+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:07.232031+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:08.232178+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:09.232377+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:10.232541+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:11.232695+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:12.232873+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:13.233071+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:14.233245+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:15.233413+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:16.233564+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:17.233696+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:18.233830+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:19.233984+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:20.234124+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:21.234277+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:22.234456+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:23.234691+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:24.234841+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:25.235016+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:26.235233+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:27.235424+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:28.235555+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:29.235737+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:30.236097+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:31.236277+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:32.236421+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:33.236711+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:34.236867+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:35.237048+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:36.237209+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:37.237335+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:38.237571+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:39.237721+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:40.237876+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:41.238047+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:42.238208+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:43.238405+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:44.238521+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:45.238715+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:46.239141+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:47.239297+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:48.239449+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:49.239591+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:50.239710+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:51.239848+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:52.240034+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:53.240178+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:54.240329+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:55.240514+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:56.240738+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:57.240916+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:58.241262+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:43:59.241422+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:00.241563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:01.241679+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:02.241830+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:03.242252+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:04.242405+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1089536 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:05.242577+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:06.242743+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:07.242916+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:08.243152+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:09.243269+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:10.243337+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:11.243452+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:12.243571+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:13.243794+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:14.243900+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:15.244130+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:16.244302+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:17.244466+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:18.244642+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:19.244917+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:20.245231+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:21.245571+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:22.245719+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:23.245931+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:24.246112+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:25.246254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:26.246466+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:27.246691+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:28.246920+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:29.247097+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:30.247232+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:31.247520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:32.247662+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:33.247815+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:34.247975+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:35.248127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:36.248280+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:37.248550+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:38.248716+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:39.248891+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1056768 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:40.249044+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:41.249208+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:42.249341+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:43.249525+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:44.249743+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:45.249944+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:46.250254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:47.250440+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:48.250670+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:49.250981+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:50.251153+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:51.251372+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:52.251585+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:53.251775+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:54.251912+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:55.252107+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:56.252328+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:57.252538+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:58.252676+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:44:59.252836+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:00.253006+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:01.253260+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:02.253440+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:03.253597+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:04.253756+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:05.253930+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:06.254104+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:07.254257+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:08.254418+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:09.254582+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:10.254733+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:11.254898+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5626 writes, 23K keys, 5626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5626 writes, 880 syncs, 6.39 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d1090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56464c3d11f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:12.255041+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:13.255228+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1007616 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:14.255377+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:15.255521+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:16.255659+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:17.255812+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:18.255958+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:19.256146+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:20.256307+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:21.256456+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:22.256686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:23.256833+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:24.257007+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:25.259307+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:26.260514+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:27.263563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:28.265096+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:29.266622+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:30.267622+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:31.267760+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:32.268708+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:33.269021+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:34.269173+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:35.269506+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:36.270038+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:37.270182+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:38.270388+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:39.270695+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:40.270913+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:41.271121+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:42.271339+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:43.271813+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:44.272160+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:45.272345+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:46.272597+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:47.272818+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:48.272987+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:49.273160+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:50.273345+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:51.273567+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:52.273729+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:53.273908+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:54.274090+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:55.274263+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:56.274545+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:57.274770+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:58.274943+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:45:59.275070+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:00.275219+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:01.275647+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:02.275821+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:03.276011+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:04.276218+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:05.276432+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:06.276734+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:07.276932+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:08.277146+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:09.277407+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:10.277635+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:11.277902+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:12.278174+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:13.278448+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:14.278720+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:15.278963+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:16.279244+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:17.279378+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:18.279522+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:19.279725+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:20.279871+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:21.280055+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:22.280248+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:23.280372+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:24.280572+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:25.280769+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:26.281124+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:27.281313+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:28.281559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:29.281783+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:30.281992+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:31.282223+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:32.282431+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:33.282655+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:34.282877+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:35.283100+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:36.283294+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:37.283584+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:38.283784+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:39.283906+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:40.284101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:41.284359+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:42.284570+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:43.284771+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:44.284961+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.158081055s of 599.986328125s, submitted: 106
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:45.285129+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70500352 unmapped: 745472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:46.285314+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:47.285537+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:48.285725+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:49.285914+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:50.286033+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:51.286356+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:52.286602+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:53.286864+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:54.287111+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:55.287364+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:56.287674+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:57.287917+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:58.288140+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:46:59.288398+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:00.288654+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:01.288955+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:02.289108+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:03.289307+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:04.289515+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:05.289674+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:06.289868+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:07.290046+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:08.290229+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:09.290442+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:10.290580+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:11.290780+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:12.290978+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:13.291190+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:14.291382+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:15.291551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:16.291774+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:17.291928+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:18.292082+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:19.292229+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:20.292365+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:21.292533+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:22.292694+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:23.292868+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:24.293051+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:25.293217+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:26.293347+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:27.293467+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:28.293686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:29.293787+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:30.293924+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:31.294056+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:32.294254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:33.294392+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:34.297852+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:35.298032+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:36.298580+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:37.299499+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:38.300158+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:39.300519+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:40.301296+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:41.301463+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:42.302295+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:43.302710+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:44.302953+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:45.303176+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:46.303598+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:47.304153+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:48.304664+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:49.304909+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:50.305336+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:51.305679+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:52.306013+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:53.306356+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:54.306548+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:55.306830+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:56.307163+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:57.307383+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:58.307599+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:47:59.307774+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:00.308014+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:01.308241+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:02.308430+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:03.308676+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:04.308922+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:05.309118+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:06.309357+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:07.309565+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:08.309689+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:09.309839+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:10.309976+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:11.310100+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:12.310394+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:13.310600+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:14.310774+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:15.310940+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:16.311085+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:17.311359+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:18.311581+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:19.311761+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:20.311886+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:21.312079+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:22.312268+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:23.312428+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:24.312599+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:25.312781+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:26.313007+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:27.313217+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:28.313435+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:29.313619+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:30.313766+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:31.313880+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:32.313983+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:33.314173+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:34.314385+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:35.314574+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:36.314800+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:37.314954+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:38.315122+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:39.315235+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:40.315398+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:41.316260+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:42.316460+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:43.316651+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:44.316824+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:45.317026+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:46.317440+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:47.317901+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:48.318308+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:49.318537+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:50.318788+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:51.319214+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:52.319630+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:53.319885+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:54.320128+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:55.320460+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:56.320819+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 507904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:57.321136+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:58.321377+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:48:59.321623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:00.321877+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:01.322193+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:02.322604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:03.322821+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:04.323016+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 499712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:05.323243+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:06.323422+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:07.323646+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:08.323878+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:09.324100+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:10.324315+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:11.324561+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:12.324783+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:13.324951+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70762496 unmapped: 483328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:14.325115+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:15.325312+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:16.325542+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:17.325732+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:18.325885+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:19.326096+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 466944 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:20.326302+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:21.327211+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:22.327383+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:23.327543+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:24.327706+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70770688 unmapped: 475136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:25.327876+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:26.328089+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:27.328389+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:28.328605+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:29.328792+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:30.328986+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:31.329180+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:32.329408+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:33.329598+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:34.329784+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:35.330022+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:36.330280+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:37.330597+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:38.330780+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:39.331009+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:40.331196+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:41.331333+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:42.331593+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:43.331787+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:44.332827+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70787072 unmapped: 458752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:45.332999+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:46.333206+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:47.333376+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:48.333572+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:49.333718+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:50.333875+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xb819b/0x179000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 442368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:51.334166+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464e5b8000
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 401408 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:52.334573+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902851 data_alloc: 218103808 data_used: 204800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 handle_osd_map epochs [129,129], i have 127, src has [1,129]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 187.600646973s of 187.992141724s, submitted: 106
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 376832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:53.334825+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 360448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:54.335231+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fca9d000/0x0/0x4ffc00000, data 0xbb90c/0x180000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 130 ms_handle_reset con 0x56464e5b8000 session 0x56464d9e5860
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 344064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcec00
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:55.335455+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 9527296 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:56.335778+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 131 ms_handle_reset con 0x56464ffcec00 session 0x5646503ea960
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:57.336101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954681 data_alloc: 218103808 data_used: 217088
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:58.336401+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc624000/0x0/0x4ffc00000, data 0x52f084/0x5f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:49:59.336689+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:00.336932+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:01.337137+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fc624000/0x0/0x4ffc00000, data 0x52f084/0x5f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:02.337335+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954841 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:03.337542+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:04.337753+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:05.337995+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:06.338249+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:07.338543+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:08.338693+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:09.338903+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:10.339111+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:11.339318+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:12.339655+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:13.339909+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:14.340170+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:15.340442+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:16.341551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:17.342078+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:18.343353+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:19.344122+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:20.344566+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [3])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:21.345292+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:22.345921+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 10
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:23.346589+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:24.346834+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:25.347130+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:26.347387+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:27.347520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:28.347861+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:29.348012+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:30.348344+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 9535488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 11
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:31.348648+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:32.348842+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:33.349095+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:34.349272+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:35.349596+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:36.349879+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:37.350048+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956615 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:38.350204+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 9469952 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.921653748s of 46.226421356s, submitted: 51
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x530ae7/0x5fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:39.350416+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:40.350573+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:41.350743+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:42.350977+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959589 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc61f000/0x0/0x4ffc00000, data 0x5326cd/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:43.351142+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:44.351379+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc61f000/0x0/0x4ffc00000, data 0x5326cd/0x5fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:45.351628+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:46.351944+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:47.352157+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959589 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:48.353310+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:49.354297+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:50.354980+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.712825775s of 11.826250076s, submitted: 30
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:51.355228+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:52.355456+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962563 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:53.355661+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:54.355994+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:55.356141+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61c000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:56.356371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:57.356522+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961683 data_alloc: 218103808 data_used: 221184
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:58.356683+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:50:59.356846+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:00.357161+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:01.357330+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:02.357503+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:03.357648+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:04.357929+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:05.359665+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:06.359878+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:07.360087+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:08.360246+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:09.360406+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:10.360598+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:11.360738+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc61d000/0x0/0x4ffc00000, data 0x534130/0x601000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:12.360872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962003 data_alloc: 218103808 data_used: 229376
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:13.361016+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.769683838s of 22.780818939s, submitted: 2
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:14.361171+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:15.361320+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc619000/0x0/0x4ffc00000, data 0x535d16/0x604000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:16.361524+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:17.361708+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 9560064 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966177 data_alloc: 218103808 data_used: 237568
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcf000
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:18.361899+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 9551872 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fc618000/0x0/0x4ffc00000, data 0x535d75/0x605000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:19.362184+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 9551872 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:20.362384+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 9543680 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:21.362558+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 9543680 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:22.362778+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973263 data_alloc: 218103808 data_used: 245760
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:23.362903+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:24.363066+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc612000/0x0/0x4ffc00000, data 0x5378d2/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:25.363230+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:26.363458+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.499263763s of 12.844394684s, submitted: 39
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 9502720 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:27.363648+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972255 data_alloc: 218103808 data_used: 245760
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:28.363817+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53789f/0x609000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:29.363982+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 9486336 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:30.364178+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:31.364325+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:32.364437+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974151 data_alloc: 218103808 data_used: 245760
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:33.364617+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 9510912 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc612000/0x0/0x4ffc00000, data 0x53796d/0x60a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:34.364770+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 9494528 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:35.365101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:36.365300+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc611000/0x0/0x4ffc00000, data 0x537a70/0x60c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 9461760 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.939931870s of 10.428670883s, submitted: 23
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:37.365466+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x53790e/0x60a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 9453568 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973367 data_alloc: 218103808 data_used: 245760
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:38.365623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 136 handle_osd_map epochs [137,138], i have 136, src has [1,138]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 9445376 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:39.365779+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 9428992 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:40.365946+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 8372224 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:41.366131+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 8372224 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:42.366353+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc60c000/0x0/0x4ffc00000, data 0x53cc76/0x611000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 8364032 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983877 data_alloc: 218103808 data_used: 253952
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:43.366523+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 8364032 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:44.366667+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 8355840 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:45.366851+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8306688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:46.367050+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc60c000/0x0/0x4ffc00000, data 0x53cd11/0x612000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 8306688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:47.367290+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.984500885s of 10.786389351s, submitted: 89
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 7290880 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990059 data_alloc: 218103808 data_used: 262144
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:48.367454+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 7282688 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:49.367607+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:50.367773+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc606000/0x0/0x4ffc00000, data 0x54056a/0x617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:51.367932+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 7266304 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x5404cf/0x616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:52.368084+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x5404cf/0x616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 7258112 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994143 data_alloc: 218103808 data_used: 262144
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:53.368308+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 7241728 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:54.368593+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:55.368845+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:56.369181+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:57.369371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.349452972s of 10.004354477s, submitted: 108
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998037 data_alloc: 218103808 data_used: 262144
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:58.369581+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:51:59.369729+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:00.370087+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:01.370292+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:02.370552+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc600000/0x0/0x4ffc00000, data 0x543c8f/0x61d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:03.370685+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:04.370981+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:05.371159+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:06.371445+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:07.371567+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:08.371724+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:09.371860+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:10.372119+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:11.372371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:12.372581+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:13.372813+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.664479256s of 16.246137619s, submitted: 23
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:14.372990+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:15.373161+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:16.373377+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:17.373633+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fe000/0x0/0x4ffc00000, data 0x545677/0x61f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000961 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:18.373867+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 7233536 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:19.374158+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:20.374394+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fc000/0x0/0x4ffc00000, data 0x5457db/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:21.374610+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:22.374826+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 7176192 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003745 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:23.375107+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:24.375297+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x5457d9/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:25.375490+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x5457d9/0x621000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:26.375738+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:27.375935+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003441 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:28.376161+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.022244453s of 15.054928780s, submitted: 7
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:29.376375+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 7151616 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:30.376623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:31.376803+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x545712/0x620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:32.377008+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002703 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:33.377208+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:34.377367+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:35.377530+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:36.377733+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x545712/0x620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 7143424 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:37.377873+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008547 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 6750208 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:38.378065+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 6750208 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:39.378264+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.203021049s of 10.306691170s, submitted: 11
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 4472832 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:40.378583+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 4308992 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:41.378794+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 3964928 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:42.379018+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fc5ad000/0x0/0x4ffc00000, data 0x596977/0x671000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016145 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 3784704 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:43.379286+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 3629056 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:44.379466+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 3465216 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:45.379646+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 3293184 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:46.380686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 78520320 unmapped: 3088384 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:47.380862+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018071 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 79667200 unmapped: 1941504 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:48.381066+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb3a6000/0x0/0x4ffc00000, data 0x5fbde3/0x6d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81010688 unmapped: 598016 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:49.381244+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.658164978s of 10.665602684s, submitted: 67
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 319488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:50.381407+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 319488 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:51.381551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 458752 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:52.381762+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019981 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb36c000/0x0/0x4ffc00000, data 0x6366ff/0x712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 155648 heap: 81608704 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:53.381927+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 1064960 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:54.382084+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 868352 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:55.382222+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 12
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1040384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:56.382403+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb325000/0x0/0x4ffc00000, data 0x67b94e/0x759000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 1040384 heap: 82657280 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:57.382549+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x56464ffcfc00
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031287 data_alloc: 218103808 data_used: 282624
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 1851392 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:58.382721+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83025920 unmapped: 679936 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:52:59.382874+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 507904 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:00.383107+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb2c7000/0x0/0x4ffc00000, data 0x6dac84/0x7b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.196979523s of 10.725030899s, submitted: 63
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 376832 heap: 83705856 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:01.383307+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 2064384 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:02.383534+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb2ab000/0x0/0x4ffc00000, data 0x6f813d/0x7d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1035961 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 942080 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:03.383736+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 942080 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:04.383919+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 1556480 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:05.384093+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83197952 unmapped: 1556480 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:06.384284+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb27c000/0x0/0x4ffc00000, data 0x726a7d/0x802000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83206144 unmapped: 1548288 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:07.384448+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1038689 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb244000/0x0/0x4ffc00000, data 0x75e463/0x83a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83083264 unmapped: 1671168 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:08.384733+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 1581056 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb236000/0x0/0x4ffc00000, data 0x76b99a/0x848000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:09.384980+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83173376 unmapped: 1581056 heap: 84754432 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:10.385194+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:11.385348+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb218000/0x0/0x4ffc00000, data 0x78a573/0x866000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:12.385550+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.899141312s of 11.591003418s, submitted: 63
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043809 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 1933312 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb20e000/0x0/0x4ffc00000, data 0x79430d/0x870000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:13.385722+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 1794048 heap: 85803008 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:14.385864+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb20e000/0x0/0x4ffc00000, data 0x794246/0x86f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 2727936 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:15.386070+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84172800 unmapped: 2678784 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:16.386291+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 2449408 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:17.386445+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043423 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 2277376 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:18.386663+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 2269184 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:19.386862+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 2105344 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:20.387092+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb1b1000/0x0/0x4ffc00000, data 0x7f1f1e/0x8cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 2441216 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:21.387314+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 2359296 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:22.387593+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.556157112s of 10.000567436s, submitted: 51
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb16b000/0x0/0x4ffc00000, data 0x836c5b/0x913000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050139 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:23.387761+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:24.387917+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 1769472 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:25.388039+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 1630208 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:26.388235+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 1327104 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:27.388369+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1060111 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 1171456 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:28.388553+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb12a000/0x0/0x4ffc00000, data 0x8784c4/0x954000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:29.388732+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:30.388879+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 1515520 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:31.389024+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 1351680 heap: 86851584 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:32.389172+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.619530678s of 10.068819046s, submitted: 44
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063633 data_alloc: 218103808 data_used: 278528
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2138112 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:33.389356+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 2138112 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:34.389562+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0b8000/0x0/0x4ffc00000, data 0x8eaa2f/0x9c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 1859584 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:35.389745+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0b8000/0x0/0x4ffc00000, data 0x8eaa2f/0x9c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0x8fb6d4/0x9d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87064576 unmapped: 835584 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:36.389941+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb099000/0x0/0x4ffc00000, data 0x90a4e3/0x9e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87293952 unmapped: 606208 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:37.390085+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066459 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87646208 unmapped: 253952 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:38.390268+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 188416 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:39.390435+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87711744 unmapped: 188416 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:40.390610+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:41.390777+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:42.390915+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067111 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:43.391055+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:44.391198+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:45.391344+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:46.391582+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:47.391729+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb05a000/0x0/0x4ffc00000, data 0x9472f8/0xa23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 145 handle_osd_map epochs [146,147], i have 145, src has [1,147]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 145 handle_osd_map epochs [146,146], i have 147, src has [1,147]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.992734909s of 15.773614883s, submitted: 64
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074259 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:48.391926+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 106496 heap: 87900160 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:49.392107+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87244800 unmapped: 1703936 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:50.392286+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb054000/0x0/0x4ffc00000, data 0x94a98d/0xa29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:51.392416+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:52.392569+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071831 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:53.392725+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87277568 unmapped: 1671168 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fb054000/0x0/0x4ffc00000, data 0x94a98d/0xa29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:54.392910+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:55.393076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:56.393240+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:57.393422+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073411 data_alloc: 218103808 data_used: 286720
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:58.393586+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:53:59.393749+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:00.393932+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:01.394061+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:02.394214+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073731 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:03.394403+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:04.394563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:05.394682+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:06.394865+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:07.394980+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:08.395137+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073731 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:09.395280+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.338314056s of 21.669164658s, submitted: 42
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:10.395568+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:11.395744+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:12.395918+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c410/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:13.396071+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075499 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87285760 unmapped: 1662976 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 ms_handle_reset con 0x56464ffcfc00 session 0x56464f8ab680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:14.396225+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 13
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:15.396400+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c410/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:16.396639+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:17.396773+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:18.396944+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:19.397116+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:20.397287+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:21.397425+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:22.397562+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:23.397692+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:24.397799+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:25.397911+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:26.398127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:27.398274+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1212416 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.731567383s of 17.763555527s, submitted: 136
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:28.398437+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075649 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c43d/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:29.398618+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:30.398839+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:31.398999+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:32.399180+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:33.399338+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c43b/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:34.399542+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87752704 unmapped: 1196032 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c43b/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:35.399713+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:36.399894+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:37.400063+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:38.400209+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:39.400341+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:40.400601+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.505785942s of 13.548912048s, submitted: 7
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:41.400749+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:42.400926+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:43.401080+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:44.401243+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:45.401393+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:46.401553+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:47.401730+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:48.401945+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074783 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:49.402095+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:50.402233+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:51.402449+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:52.402645+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:53.402786+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073753 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:54.402997+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb053000/0x0/0x4ffc00000, data 0x94c375/0xa2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87760896 unmapped: 1187840 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.486737251s of 13.497967720s, submitted: 2
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:55.403181+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:56.403383+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:57.403568+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:58.403694+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:54:59.403861+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87769088 unmapped: 1179648 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:00.404028+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:01.404227+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:02.404383+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:03.404543+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:04.404699+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:05.404889+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:06.406508+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:07.406864+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:08.407050+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb052000/0x0/0x4ffc00000, data 0x94c3d4/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075521 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:09.407429+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.252617836s of 15.257152557s, submitted: 1
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:10.407955+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87777280 unmapped: 1171456 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:11.408124+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c3a5/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7325 writes, 29K keys, 7325 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7325 writes, 1543 syncs, 4.75 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1699 writes, 5316 keys, 1699 commit groups, 1.0 writes per commit group, ingest: 7.07 MB, 0.01 MB/s
                                           Interval WAL: 1699 writes, 663 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:12.408296+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:13.408586+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077241 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:14.408837+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c46b/0xa2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:15.409022+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:16.409181+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 1163264 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:17.409360+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:18.409629+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077113 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c46b/0xa2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:19.409894+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87793664 unmapped: 1155072 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc ms_handle_reset ms_handle_reset con 0x56464ffd3800
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2223829226
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: get_auth_request con 0x56465051d000 auth_method 0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_configure stats_period=5
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:20.410075+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:21.410279+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:22.410462+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:23.410673+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076375 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb051000/0x0/0x4ffc00000, data 0x94c3a5/0xa2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:24.410840+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:25.411047+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87941120 unmapped: 1007616 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 ms_handle_reset con 0x56464ffce400 session 0x56464da092c0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x564651636000
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:26.411318+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 87949312 unmapped: 999424 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.098913193s of 17.159513474s, submitted: 11
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:27.411509+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 925696 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:28.411739+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083361 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88023040 unmapped: 925696 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:29.411890+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fb033000/0x0/0x4ffc00000, data 0x9696cb/0xa4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 811008 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:30.412029+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88154112 unmapped: 794624 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:31.412180+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88170496 unmapped: 778240 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:32.412325+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:33.412526+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085661 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:34.412666+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 88383488 unmapped: 565248 heap: 88948736 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafff000/0x0/0x4ffc00000, data 0x99e5f5/0xa7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:35.412784+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 1695744 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:36.412935+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 1695744 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.479039192s of 10.263687134s, submitted: 42
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:37.413037+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89505792 unmapped: 1540096 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:38.413279+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090171 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:39.413425+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:40.413669+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafce000/0x0/0x4ffc00000, data 0x9d01c7/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:41.413868+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafce000/0x0/0x4ffc00000, data 0x9d01c7/0xab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89710592 unmapped: 1335296 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:42.414095+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 1802240 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:43.414309+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088099 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89243648 unmapped: 1802240 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafb4000/0x0/0x4ffc00000, data 0x9ea131/0xaca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:44.414455+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:45.414698+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:46.414939+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1687552 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.607292175s of 10.000217438s, submitted: 7
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:47.415086+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:48.415312+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088335 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:49.415558+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:50.415755+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:51.415924+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:52.416143+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:53.416326+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088335 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:54.416546+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:55.416826+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:56.417054+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89391104 unmapped: 1654784 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:57.417238+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.095285416s of 10.118075371s, submitted: 5
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fafa5000/0x0/0x4ffc00000, data 0x9f8ee7/0xad9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xa0788e/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:58.417411+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090539 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:55:59.417586+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:00.417853+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:01.418048+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf95000/0x0/0x4ffc00000, data 0xa0788e/0xae9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89440256 unmapped: 1605632 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:02.418231+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:03.418405+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf7f000/0x0/0x4ffc00000, data 0xa1db4a/0xaff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092351 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 1515520 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:04.418552+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89636864 unmapped: 1409024 heap: 91045888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:05.418672+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 2670592 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:06.418925+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf4f000/0x0/0x4ffc00000, data 0xa4c367/0xb2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 2670592 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:07.419055+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89530368 unmapped: 2564096 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:08.419252+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099679 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 89686016 unmapped: 2408448 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.348463058s of 11.521899223s, submitted: 32
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faf3b000/0x0/0x4ffc00000, data 0xa5f9ac/0xb42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [0,0,2,1])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:09.419391+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 999424 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:10.419624+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 999424 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:11.419776+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91267072 unmapped: 827392 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:12.419949+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 1007616 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:13.420084+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102317 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91086848 unmapped: 1007616 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:14.420206+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faedf000/0x0/0x4ffc00000, data 0xabd3f8/0xb9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91029504 unmapped: 1064960 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:15.420403+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 90701824 unmapped: 1392640 heap: 92094464 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:16.420968+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 90963968 unmapped: 2179072 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:17.421122+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae8a000/0x0/0x4ffc00000, data 0xb10d52/0xbf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:18.421281+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae8a000/0x0/0x4ffc00000, data 0xb10d52/0xbf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107149 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.729861259s of 10.001068115s, submitted: 57
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:19.421433+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 91250688 unmapped: 1892352 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:20.421563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 548864 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:21.421709+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 548864 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:22.421835+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 385024 heap: 93143040 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:23.422176+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109021 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2121728 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:24.422391+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fae26000/0x0/0x4ffc00000, data 0xb76964/0xc58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93118464 unmapped: 2121728 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:25.422611+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2097152 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:26.422851+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93143040 unmapped: 2097152 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:27.422974+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2080768 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:28.423162+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa9cd000/0x0/0x4ffc00000, data 0xbbf591/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116113 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93159424 unmapped: 2080768 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa9cd000/0x0/0x4ffc00000, data 0xbbf591/0xca1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:29.423302+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.608607292s of 10.946014404s, submitted: 51
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93241344 unmapped: 1998848 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:30.423450+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 92717056 unmapped: 2523136 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:31.423667+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93880320 unmapped: 1359872 heap: 95240192 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:32.423799+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93888512 unmapped: 2400256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:33.423955+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115413 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93888512 unmapped: 2400256 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:34.424115+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa96d000/0x0/0x4ffc00000, data 0xc1f447/0xd01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94044160 unmapped: 2244608 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:35.424265+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1982464 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:36.424501+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94306304 unmapped: 1982464 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:37.424652+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa93b000/0x0/0x4ffc00000, data 0xc51e55/0xd33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94625792 unmapped: 1662976 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:38.424833+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121513 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93782016 unmapped: 2506752 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:39.425021+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa8ea000/0x0/0x4ffc00000, data 0xca2fa3/0xd84000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 2449408 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:40.425181+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 93839360 unmapped: 2449408 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:41.425352+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.250131607s of 11.549646378s, submitted: 67
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 1253376 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:42.425521+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 1236992 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:43.425723+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136145 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 1073152 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:44.425874+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 1040384 heap: 96288768 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:45.426000+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa873000/0x0/0x4ffc00000, data 0xd17636/0xdfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95322112 unmapped: 2015232 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:46.426127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95371264 unmapped: 1966080 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:47.426299+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:48.426412+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132769 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:49.426623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94896128 unmapped: 2441216 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa83f000/0x0/0x4ffc00000, data 0xd4c576/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:50.426767+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94928896 unmapped: 2408448 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:51.426959+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:52.427200+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:53.427414+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132079 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.240543365s of 12.828203201s, submitted: 151
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:54.427780+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:55.427971+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:56.428314+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94937088 unmapped: 2400256 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:57.428465+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2392064 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:58.428621+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129741 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 2392064 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:56:59.428808+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:00.428989+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:01.429108+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:02.429295+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:03.429566+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129741 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:04.429794+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:05.429941+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.993387222s of 12.013343811s, submitted: 2
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:06.430169+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa841000/0x0/0x4ffc00000, data 0xd4c440/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:07.430387+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:08.430670+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa842000/0x0/0x4ffc00000, data 0xd4c3a5/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129051 data_alloc: 218103808 data_used: 294912
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94953472 unmapped: 2383872 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:09.430864+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:10.431028+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa842000/0x0/0x4ffc00000, data 0xd4c3a5/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:11.431215+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:12.431389+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:13.431557+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134993 data_alloc: 218103808 data_used: 303104
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:14.431754+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:15.431964+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94961664 unmapped: 2375680 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:16.432214+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0xd4e026/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:17.432371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:18.432564+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1134993 data_alloc: 218103808 data_used: 303104
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 2342912 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:19.432737+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83d000/0x0/0x4ffc00000, data 0xd4e026/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:20.432924+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.021748543s of 14.105439186s, submitted: 21
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:21.433064+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:22.433214+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:23.433368+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132345 data_alloc: 218103808 data_used: 303104
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95027200 unmapped: 2310144 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:24.433564+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa83f000/0x0/0x4ffc00000, data 0xd4df8b/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 149 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:25.433730+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:26.433898+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: handle_auth_request added challenge on 0x564651636400
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95035392 unmapped: 2301952 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 14
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:27.434047+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 2285568 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:28.434246+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139175 data_alloc: 218103808 data_used: 311296
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95051776 unmapped: 2285568 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:29.434412+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb9a/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95076352 unmapped: 2260992 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:30.434570+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb20/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:31.434799+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:32.434960+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:33.435226+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139479 data_alloc: 218103808 data_used: 323584
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:34.435416+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95084544 unmapped: 2252800 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:35.435629+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.750409126s of 15.925973892s, submitted: 16
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:36.435872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83a000/0x0/0x4ffc00000, data 0xd4fb20/0xe34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:37.436059+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:38.436187+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136861 data_alloc: 218103808 data_used: 319488
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:39.436385+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:40.436548+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:41.436691+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa83c000/0x0/0x4ffc00000, data 0xd4f9ee/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:42.436859+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:43.436985+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141035 data_alloc: 218103808 data_used: 327680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:44.437164+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:45.437271+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:46.437454+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:47.438717+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:48.438896+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa838000/0x0/0x4ffc00000, data 0xd51604/0xe35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141035 data_alloc: 218103808 data_used: 327680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:49.439076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:50.439253+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.859284401s of 15.013036728s, submitted: 41
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:51.439417+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:52.439563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:53.439672+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144009 data_alloc: 218103808 data_used: 327680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:54.439829+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa835000/0x0/0x4ffc00000, data 0xd53087/0xe38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:55.439974+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 2211840 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:56.440130+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:57.440258+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:58.440424+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145777 data_alloc: 218103808 data_used: 327680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:57:59.440552+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:00.440714+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:01.440863+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:02.440983+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa834000/0x0/0x4ffc00000, data 0xd53122/0xe39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:03.441135+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145777 data_alloc: 218103808 data_used: 327680
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:04.441321+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:05.441454+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.953479767s of 14.017258644s, submitted: 14
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:06.441671+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:07.441803+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95092736 unmapped: 2244608 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa836000/0x0/0x4ffc00000, data 0xd53087/0xe38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:08.441945+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa832000/0x0/0x4ffc00000, data 0xd54c6d/0xe3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149071 data_alloc: 218103808 data_used: 335872
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:09.442095+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:10.442266+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 153 heartbeat osd_stat(store_statfs(0x4fa831000/0x0/0x4ffc00000, data 0xd54d08/0xe3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:11.442424+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95100928 unmapped: 2236416 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:12.442561+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95109120 unmapped: 2228224 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:13.442731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95125504 unmapped: 2211840 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154243 data_alloc: 218103808 data_used: 344064
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:14.442909+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95133696 unmapped: 2203648 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58312/0xe41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58312/0xe41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:15.443057+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:16.443255+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:17.443514+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:18.443718+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95141888 unmapped: 2195456 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.397949219s of 13.738856316s, submitted: 53
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156227 data_alloc: 218103808 data_used: 344064
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:19.443871+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:20.444132+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa82b000/0x0/0x4ffc00000, data 0xd58519/0xe43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:21.444506+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:22.444702+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95150080 unmapped: 2187264 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:23.444889+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa827000/0x0/0x4ffc00000, data 0xd5a13f/0xe46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163855 data_alloc: 218103808 data_used: 356352
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3255712327' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:24.445352+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa824000/0x0/0x4ffc00000, data 0xd5badd/0xe48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:25.445720+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:26.446419+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2154496 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:27.446623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:28.447076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa825000/0x0/0x4ffc00000, data 0xd5bbef/0xe49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:29.447410+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163541 data_alloc: 218103808 data_used: 360448
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.204563141s of 10.300806046s, submitted: 34
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:30.447566+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:31.447867+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa826000/0x0/0x4ffc00000, data 0xd5b9e7/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:32.448037+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:33.448286+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95207424 unmapped: 2129920 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa826000/0x0/0x4ffc00000, data 0xd5b9e7/0xe47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:34.448450+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 2121728 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:35.448632+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95215616 unmapped: 2121728 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:36.448803+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95223808 unmapped: 2113536 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:37.449003+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:38.449172+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:39.449404+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:40.449570+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:41.449801+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:42.449967+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:43.450163+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:44.450427+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:45.450604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:46.450773+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:47.450968+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:48.451172+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:49.451393+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1165097 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:50.451695+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:51.451889+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95232000 unmapped: 2105344 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:52.452068+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95248384 unmapped: 2088960 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 heartbeat osd_stat(store_statfs(0x4fa823000/0x0/0x4ffc00000, data 0xd5d46a/0xe4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.776588440s of 23.795372009s, submitted: 13
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:53.452191+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:54.452350+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168071 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa820000/0x0/0x4ffc00000, data 0xd5f050/0xe4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:55.452634+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:56.452862+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:57.453022+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:58.453160+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95256576 unmapped: 2080768 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:58:59.453302+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168071 data_alloc: 218103808 data_used: 364544
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95264768 unmapped: 2072576 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa820000/0x0/0x4ffc00000, data 0xd5f050/0xe4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:00.453564+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:01.453786+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:02.453993+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95272960 unmapped: 2064384 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 159 handle_osd_map epochs [160,161], i have 159, src has [1,161]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.872550011s of 10.118376732s, submitted: 39
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:03.454136+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95305728 unmapped: 2031616 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:04.454281+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180155 data_alloc: 218103808 data_used: 376832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2007040 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:05.454458+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa817000/0x0/0x4ffc00000, data 0xd62886/0xe56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2007040 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:06.454658+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1974272 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:07.454784+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95363072 unmapped: 1974272 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:08.454919+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa815000/0x0/0x4ffc00000, data 0xd643d1/0xe58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:09.455079+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179967 data_alloc: 218103808 data_used: 376832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:10.455187+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:11.455316+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa816000/0x0/0x4ffc00000, data 0xd64336/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:12.455578+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa816000/0x0/0x4ffc00000, data 0xd643d1/0xe58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:13.455754+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 1957888 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.599443436s of 10.774303436s, submitted: 54
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:14.455896+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185029 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:15.456087+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:16.456252+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:17.456403+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96444416 unmapped: 892928 heap: 97337344 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa813000/0x0/0x4ffc00000, data 0xd65d99/0xe5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:18.456544+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 1916928 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:19.456715+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187137 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 1916928 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:20.456861+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:21.457133+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:22.457323+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 1908736 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:23.457535+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96485376 unmapped: 1900544 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:24.457704+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187137 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa810000/0x0/0x4ffc00000, data 0xd679af/0xe5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.359882355s of 10.512098312s, submitted: 50
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa80d000/0x0/0x4ffc00000, data 0xd69432/0xe60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:25.457920+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:26.458129+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa80e000/0x0/0x4ffc00000, data 0xd69397/0xe5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:27.458280+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:28.458440+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 96493568 unmapped: 1892352 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 165 handle_osd_map epochs [166,167], i have 165, src has [1,167]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:29.458603+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195897 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 827392 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:30.458785+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 827392 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:31.458907+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 819200 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:32.459048+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa807000/0x0/0x4ffc00000, data 0xd6cc2e/0xe66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 819200 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:33.459208+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 811008 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:34.459373+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197665 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97574912 unmapped: 811008 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.567685127s of 10.729619026s, submitted: 50
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:35.459524+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:36.459769+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:37.459883+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:38.460006+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fa804000/0x0/0x4ffc00000, data 0xd6e6b1/0xe69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:39.460134+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201703 data_alloc: 218103808 data_used: 385024
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:40.460338+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:41.460461+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:42.460636+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97583104 unmapped: 802816 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:43.460845+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 168 ms_handle_reset con 0x564651636400 session 0x56464e6b4780
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:44.461000+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 15
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:45.461104+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:46.461271+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:47.461461+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:48.461633+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:49.461793+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:50.461975+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:51.462158+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa802000/0x0/0x4ffc00000, data 0xd701fc/0xe6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:52.462338+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:53.462624+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:54.462777+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202539 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97894400 unmapped: 491520 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:55.462978+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _renew_subs
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.953063965s of 20.050271988s, submitted: 204
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:56.463205+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:57.463384+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:58.463559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T05:59:59.463709+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205513 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:00.463859+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:01.464076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:02.464266+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:03.464408+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 483328 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:04.464785+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205513 data_alloc: 218103808 data_used: 393216
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:05.464969+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:06.465191+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:07.465520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:08.465724+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:09.465938+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:10.466081+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:11.466282+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:12.466452+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:13.466703+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:14.466889+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:15.467149+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:16.467446+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:17.467584+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:18.467725+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:19.467910+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 475136 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:20.468045+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:21.468190+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:22.468414+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:23.468613+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:24.468804+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:25.468982+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:26.469206+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:27.469363+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:28.469632+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:29.470056+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:30.470397+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:31.470663+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:32.470895+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:33.471869+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:34.472156+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97918976 unmapped: 466944 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:35.472347+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:36.472549+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:37.472911+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:38.473333+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:39.473580+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205673 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:40.473721+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:41.474080+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:42.474346+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 97927168 unmapped: 458752 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:43.474587+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa7ff000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.334320068s of 48.347373962s, submitted: 15
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 ms_handle_reset con 0x56464ffcf000 session 0x5646504241e0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:44.474845+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Got map version 16
Nov 22 06:10:22 compute-0 ceph-osd[89779]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2223829226,v1:192.168.122.100:6801/2223829226]
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:45.475010+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:46.475310+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:47.475616+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:48.475854+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:49.476014+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98050048 unmapped: 335872 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:50.476152+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:51.476367+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:52.476520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:53.476806+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:54.477046+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:55.477211+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:56.477530+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:57.477758+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:58.477972+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:00:59.478217+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:00.478371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:01.478600+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:02.479195+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:03.480175+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:04.480743+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:05.480862+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:06.481101+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:07.481362+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:08.481559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:09.481711+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:10.481906+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:11.482026+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:12.482142+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:13.482270+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:14.482399+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:15.482540+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:16.482703+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:17.482813+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:18.482956+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:19.483108+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:20.483274+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:21.483393+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98066432 unmapped: 319488 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:22.483549+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98058240 unmapped: 327680 heap: 98385920 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:23.483675+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98246656 unmapped: 1187840 heap: 99434496 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:24.483853+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98336768 unmapped: 2146304 heap: 100483072 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:25.483977+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98385920 unmapped: 2097152 heap: 100483072 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:26.484128+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf dump' '{prefix=perf dump}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf schema' '{prefix=perf schema}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:27.484235+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:28.484342+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:29.484483+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:30.484644+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:31.484766+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:32.484913+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:33.485080+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98500608 unmapped: 13025280 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:34.485254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 13017088 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:35.485373+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 13017088 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:36.485530+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 13017088 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:37.485646+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 13017088 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:38.485752+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98508800 unmapped: 13017088 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:39.485951+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:40.486072+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:41.486191+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:42.486323+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:43.486455+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:44.486592+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:45.486705+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:46.486831+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:47.486936+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:48.487115+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:49.487257+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:50.487413+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:51.487578+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:52.487731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:53.487891+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:54.488074+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:55.488260+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:56.488510+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:57.488686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:58.488889+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:01:59.489082+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:00.489217+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:01.489390+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:02.489546+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:03.489700+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98516992 unmapped: 13008896 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:04.489911+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:05.490067+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:06.490987+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:07.491200+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:08.491462+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:09.491830+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98525184 unmapped: 13000704 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:10.492052+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:11.492298+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:12.492604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:13.492819+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:14.493055+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:15.493239+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:16.493529+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:17.493699+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:18.493869+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:19.494004+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98533376 unmapped: 12992512 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:20.494178+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:21.494355+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:22.494567+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:23.494736+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:24.494918+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:25.495108+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:26.495360+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:27.495535+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98549760 unmapped: 12976128 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:28.495685+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:29.495834+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:30.495955+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:31.496114+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:32.496278+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:33.496520+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:34.496697+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:35.496872+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:36.497069+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:37.497228+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:38.497397+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:39.497619+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98557952 unmapped: 12967936 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:40.497824+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:41.497997+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:42.498218+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:43.498428+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:44.498635+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:45.498827+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:46.499118+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:47.499321+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:48.499510+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:49.499671+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:50.499853+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:51.500012+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:52.500190+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:53.500388+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:54.500538+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98574336 unmapped: 12951552 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:55.500678+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98582528 unmapped: 12943360 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:56.500891+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98582528 unmapped: 12943360 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:57.501062+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98582528 unmapped: 12943360 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:58.501248+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98582528 unmapped: 12943360 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:02:59.501433+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98582528 unmapped: 12943360 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:00.501625+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:01.501780+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:02.501948+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:03.502184+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:04.502446+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:05.502654+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:06.502923+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:07.503135+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:08.503365+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:09.504211+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:10.504388+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:11.505268+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:12.505746+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:13.506432+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:14.507039+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:15.507405+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:16.507850+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:17.508254+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:18.508406+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:19.508710+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98607104 unmapped: 12918784 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:20.508969+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:21.509306+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:22.509546+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:23.509770+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:24.509988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:25.510192+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:26.510728+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:27.510913+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:28.511094+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:29.511284+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:30.511449+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:31.511660+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:32.511834+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:33.512035+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:34.512207+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:35.512399+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:36.512711+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:37.512892+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:38.513078+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:39.513214+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:40.513352+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:41.513554+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:42.513721+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:43.513860+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:44.514077+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:45.514428+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:46.514811+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:47.515076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:48.515378+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:49.515601+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:50.515892+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:51.516166+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:52.516422+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:53.516753+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:54.517076+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:55.517235+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:56.517438+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:57.517613+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:58.517827+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:03:59.517990+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:00.518201+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:01.518419+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:02.518607+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:03.518789+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:04.518973+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:05.519136+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:06.519341+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:07.519530+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:08.519716+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:09.519877+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:10.520040+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:11.520301+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:12.520517+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:13.520684+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:14.520879+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:15.521049+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:16.521230+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:17.521370+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:18.521542+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:19.521747+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:20.521852+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:21.522008+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:22.522196+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:23.522392+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:24.522534+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:25.522849+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:26.523087+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98656256 unmapped: 12869632 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:27.523284+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:28.523564+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:29.523745+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:30.523926+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:31.524108+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:32.524296+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:33.524446+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:34.524608+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:35.524859+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:36.525148+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:37.525337+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:38.525510+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:39.525657+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:40.525815+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:41.526004+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:42.526163+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:43.526376+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:44.526517+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:45.526680+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:46.527147+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:47.527276+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:48.527412+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:49.527806+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:50.527963+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:51.528593+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:52.528726+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:53.529253+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:54.529548+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:55.529973+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:56.530395+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:57.530584+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:58.530988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 12836864 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:04:59.531342+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:00.531549+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:01.531746+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:02.531984+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:03.532248+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:04.532458+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:05.532690+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:06.532954+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:07.533147+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:08.533354+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:09.533554+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:10.533730+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:11.533925+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9147 writes, 34K keys, 9147 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9147 writes, 2199 syncs, 4.16 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1822 writes, 5236 keys, 1822 commit groups, 1.0 writes per commit group, ingest: 6.79 MB, 0.01 MB/s
                                           Interval WAL: 1822 writes, 656 syncs, 2.78 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:12.534145+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:13.534309+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:14.534563+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:15.534774+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:16.534991+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:17.535216+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98705408 unmapped: 12820480 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:18.535387+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98713600 unmapped: 12812288 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:19.535621+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:20.535813+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:21.536013+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:22.536234+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:23.536407+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:24.536604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:25.536793+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:26.537140+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:27.537343+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:28.537539+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:29.537636+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:30.537785+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:31.537883+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:32.538079+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:33.538220+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:34.538410+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:35.538622+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:36.538813+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:37.540230+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:38.540439+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98729984 unmapped: 12795904 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:39.540645+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:40.540789+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:41.540925+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:42.541029+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:43.541177+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:44.541349+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:45.541565+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:46.541780+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:47.541968+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:48.542159+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:49.542312+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:50.542458+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:51.542758+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:52.542926+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:53.543065+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:54.543212+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:55.543349+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:56.543527+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:57.543649+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:58.543799+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98754560 unmapped: 12771328 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:05:59.544001+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 12754944 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:00.544115+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 12754944 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:01.544253+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 12754944 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:02.544410+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 12754944 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:03.544558+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98770944 unmapped: 12754944 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:04.544733+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:05.544873+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:06.545045+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:07.545192+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:08.545392+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:09.545560+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:10.545875+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:11.546019+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:12.546174+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:13.546360+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:14.546524+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:15.546660+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:16.546871+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:17.547081+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:18.547294+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:19.547522+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98779136 unmapped: 12746752 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:20.547707+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:21.547890+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:22.548115+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:23.548344+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:24.548745+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:25.549924+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:26.550241+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98795520 unmapped: 12730368 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:27.551926+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:28.552228+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:29.552527+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:30.552690+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:31.552999+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:32.553252+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:33.553769+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:34.553948+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:35.554284+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:36.554664+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:37.554879+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:38.555119+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:39.555604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98803712 unmapped: 12722176 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:40.555803+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 12705792 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:41.555985+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 12705792 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:42.556164+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 12705792 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:43.556325+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98820096 unmapped: 12705792 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:44.556534+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98828288 unmapped: 12697600 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 361.412506104s of 361.432830811s, submitted: 144
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:45.556699+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98680832 unmapped: 12845056 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:46.556950+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:47.557153+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:48.557323+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:49.557457+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:50.557623+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:51.557811+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98623488 unmapped: 12902400 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:52.557962+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:53.558073+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:54.558262+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:55.558539+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:56.559000+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:57.560871+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:58.561215+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:06:59.563627+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:00.564085+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:01.564526+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:02.564941+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:03.577973+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:04.578091+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:05.578224+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:06.578376+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:07.578549+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:08.578696+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:09.578873+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:10.579013+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:11.579575+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:12.580088+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:13.580290+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:14.580551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:15.580759+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:16.580946+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:17.581127+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:18.581302+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:19.581539+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:20.581666+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:21.581900+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:22.582318+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:23.582624+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:24.582796+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:25.582950+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:26.583184+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:27.583379+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:28.583537+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:29.583731+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:30.583913+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:31.584059+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:32.584229+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:33.584360+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:34.584551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:35.584701+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:36.584894+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:37.585075+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:38.585249+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:39.585360+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:40.585559+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:41.585686+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:42.585840+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:43.586082+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:44.586298+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:45.586425+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:46.586599+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:47.586769+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:48.586988+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:49.587168+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:50.587291+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:51.587442+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:52.587604+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:53.587791+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:54.587954+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:55.588086+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:56.588248+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:57.588405+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:58.588620+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:07:59.588807+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:00.589084+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:01.593204+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:02.593855+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:03.594153+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:04.595687+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:05.596200+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:06.598524+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:07.598962+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:08.599298+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:09.599647+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:10.599789+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:11.599997+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:12.600204+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:13.600437+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:14.600726+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:15.600956+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:16.601276+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:17.601592+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:18.601734+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:19.601906+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:20.602100+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:21.602394+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:22.602568+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:23.602695+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:24.602828+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:25.603158+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:26.603334+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:27.603566+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:28.603827+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:29.604056+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:30.604281+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:31.604449+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:32.604620+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:33.604768+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:34.604909+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:35.605055+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:36.605294+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:37.605498+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:38.605635+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:39.605823+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:40.605993+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:41.606227+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:42.606428+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:43.606670+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:44.606836+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:45.606994+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:46.607170+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:47.607369+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:48.607529+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:49.607725+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:50.607849+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:51.608017+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:52.608193+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:53.608315+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:54.608546+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:55.608664+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:56.608843+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:57.609011+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:58.609117+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:08:59.609272+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:00.609422+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:01.609577+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:02.609761+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:03.610017+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:04.610196+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:05.610585+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:06.610888+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:07.611671+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:08.612138+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:09.612653+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:10.613035+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:11.613201+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:12.613371+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:13.613528+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:14.614707+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:15.614865+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:16.615097+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:17.615281+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:18.615551+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:19.615764+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:20.615933+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:21.616135+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:22.616322+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:23.616581+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:24.616772+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:25.616992+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:26.617257+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:27.617522+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:28.617786+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:29.617963+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:30.618111+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:31.618260+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:32.618394+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:33.618548+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:34.618675+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:35.618893+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:36.619141+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:37.619277+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:38.619404+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:39.619523+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:40.619697+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:41.619881+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:42.620047+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:43.620184+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:44.620338+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:45.620524+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:46.620811+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98639872 unmapped: 12886016 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:47.620994+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 12877824 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:48.621172+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98648064 unmapped: 12877824 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: osd.0 170 heartbeat osd_stat(store_statfs(0x4fa800000/0x0/0x4ffc00000, data 0xd71c5f/0xe6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:49.621469+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98664448 unmapped: 12861440 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:50.621715+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: tick
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_tickets
Nov 22 06:10:22 compute-0 ceph-osd[89779]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T06:09:51.621888+0000)
Nov 22 06:10:22 compute-0 ceph-osd[89779]: prioritycache tune_memory target: 4294967296 mapped: 98631680 unmapped: 12894208 heap: 111525888 old mem: 2845415832 new mem: 2845415832
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 06:10:22 compute-0 ceph-osd[89779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 06:10:22 compute-0 ceph-osd[89779]: bluestore.MempoolThread(0x56464c4afb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204793 data_alloc: 218103808 data_used: 397312
Nov 22 06:10:22 compute-0 ceph-osd[89779]: do_command 'log dump' '{prefix=log dump}'
Nov 22 06:10:22 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 06:10:22 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14967 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:23 compute-0 ceph-mon[75840]: from='client.14955 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 06:10:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3498815368' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 06:10:23 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 06:10:23 compute-0 ceph-mon[75840]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 06:10:23 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3255712327' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 06:10:23 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 22 06:10:23 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306983851' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 06:10:23 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 22 06:10:23 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542135860' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 06:10:24 compute-0 ceph-mon[75840]: from='client.14967 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:24 compute-0 ceph-mon[75840]: pgmap v1529: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3306983851' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 06:10:24 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2542135860' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 06:10:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 22 06:10:24 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701759959' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 06:10:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:24 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 22 06:10:24 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863232465' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 06:10:25 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 06:10:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1701759959' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 06:10:25 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/863232465' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 06:10:25 compute-0 systemd[1]: Started Hostname Service.
Nov 22 06:10:25 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:25 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14977 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:25 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 22 06:10:25 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001152736' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 06:10:26 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 22 06:10:26 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114004736' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 06:10:26 compute-0 ceph-mon[75840]: pgmap v1530: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:26 compute-0 ceph-mon[75840]: from='client.14977 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3001152736' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 06:10:26 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3114004736' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 06:10:26 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14983 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 22 06:10:27 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468346310' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 06:10:27 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.267 255664 DEBUG oslo_service.periodic_task [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.327 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.327 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.327 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.327 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.328 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:10:27 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14987 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:27 compute-0 ceph-mon[75840]: from='client.14983 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:27 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/3468346310' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 06:10:27 compute-0 ceph-mon[75840]: pgmap v1531: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:27 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:10:27 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623510180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.776 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.969 255664 WARNING nova.virt.libvirt.driver [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.970 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4593MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.971 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 06:10:27 compute-0 nova_compute[255660]: 2025-11-22 06:10:27.971 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 06:10:27 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14991 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:28 compute-0 podman[293627]: 2025-11-22 06:10:28.19723982 +0000 UTC m=+0.055745421 container health_status 0f184b5eecdd1bb3708a24dce57654b8bcec6563cf93b8d878c5ac244f81e22c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 06:10:28 compute-0 podman[293628]: 2025-11-22 06:10:28.361354183 +0000 UTC m=+0.218454406 container health_status 90c029d6c77473d444315a20da8fc4a79db7bcb7e30ba854ba45d778b088047b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 06:10:28 compute-0 nova_compute[255660]: 2025-11-22 06:10:28.372 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 06:10:28 compute-0 nova_compute[255660]: 2025-11-22 06:10:28.372 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 06:10:28 compute-0 nova_compute[255660]: 2025-11-22 06:10:28.406 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 06:10:28 compute-0 ceph-mon[75840]: from='client.14987 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:28 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/623510180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:10:28 compute-0 ceph-mon[75840]: from='client.14991 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 22 06:10:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/195177224' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 06:10:28 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 06:10:28 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025611103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:10:29 compute-0 nova_compute[255660]: 2025-11-22 06:10:29.023 255664 DEBUG oslo_concurrency.processutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 06:10:29 compute-0 nova_compute[255660]: 2025-11-22 06:10:29.034 255664 DEBUG nova.compute.provider_tree [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed in ProviderTree for provider: 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 06:10:29 compute-0 nova_compute[255660]: 2025-11-22 06:10:29.145 255664 DEBUG nova.scheduler.client.report [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Inventory has not changed for provider 7a36ad86-8d7b-4adc-bf57-f66e1a8d4d60 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 06:10:29 compute-0 nova_compute[255660]: 2025-11-22 06:10:29.147 255664 DEBUG nova.compute.resource_tracker [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 06:10:29 compute-0 nova_compute[255660]: 2025-11-22 06:10:29.147 255664 DEBUG oslo_concurrency.lockutils [None req-bf99e480-2fe6-45ea-92de-8e84eee25744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 06:10:29 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 22 06:10:29 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726452377' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 06:10:29 compute-0 ceph-mon[75840]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 06:10:29 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14999 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/195177224' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/1025611103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mon[75840]: pgmap v1532: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:30 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/726452377' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.15001 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005018578661196848 of space, bias 4.0, pg target 0.6022294393436218 quantized to 16 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 06:10:30 compute-0 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 06:10:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 22 06:10:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958790790' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 06:10:30 compute-0 ceph-mon[75840]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 22 06:10:30 compute-0 ceph-mon[75840]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147878428' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mon[75840]: from='client.14999 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mon[75840]: from='client.15001 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/2958790790' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mon[75840]: from='client.? 192.168.122.100:0/4147878428' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 72 MiB data, 330 MiB used, 60 GiB / 60 GiB avail
Nov 22 06:10:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.15007 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 06:10:31 compute-0 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.15009 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
